8 April 2023

Russell Coker: Storage Trends 2023

It's been 2 years since my last blog post about storage trends [1].

Minimum Storage <=2TB In 2021 I stated that as MSY had 2TB disks for $72 and 2TB SSDs for $245 it was barely worth considering a 2TB disk, and anything less than 2TB wasn't worth considering. Now for 2TB of storage from MSY, NVMe starts at $129, SATA SSD starts at $143, and hard disks start at $75. I guess that NVMe is slightly cheaper due to some combination of economies of scale for manufacture/sales and lower postage costs. It really doesn't make sense to consider hard disks for storing 2TB or less. For a small system (PC or laptop) the cheapest storage device is $19 for a 128G SATA SSD, but it wouldn't make sense to buy that when you can get a 256G SATA SSD for $22 or a 240G NVMe device for $23; saving $3 on storage isn't worth it. For 512G of storage the prices are $32 for NVMe and $33 for SATA SSD. For 1TB of storage the prices start at $68 for SATA SSD and $74 for NVMe. For the vast majority of home users, 1TB of SATA SSD or NVMe is probably the minimum storage capacity to consider: the $50 price difference isn't much when considering the entire price of a PC or laptop, and anything less than 1TB will run out quickly with modern use.

Larger Storage 4TB+ The price for 4TB of storage from MSY is NVMe starting at $349, SATA SSD starting at $369, and hard disks starting at $115. For a home user who needs 4TB of RAID-1 storage it might be worth getting hard drives and saving about $470 (two NVMe devices at $349 each come to $698, versus $230 for two hard disks); for business use it wouldn't make sense. Some laptops have two NVMe sockets, so 8TB of storage (or 4TB of RAID-1) in a laptop would be interesting. For 8TB of storage the MSY prices are SATA SSD for $739 and hard drives starting at $179. Hard drives are probably the best choice for most situations where there is a need to store 8TB or more of data, but the prices are low enough that an 8TB SSD can be considered for home use; it doesn't seem that long ago that the 4TB hard drives I bought for my home server were almost that expensive.

Big Storage MSY doesn't sell 8TB NVMe; such devices are on eBay for $1700 for regular M.2 NVMe and just under $1000 for U.2 (server hot-swap devices). So if you need more than 8TB of NVMe storage then buying a server with U.2 built in is probably the correct solution. For home users who need more than 8TB of storage, hard drives are a good solution. One issue is that the more affordable and larger drives use Shingled Magnetic Recording (SMR), which has different performance characteristics for certain workloads; apparently SMR performs badly for anything other than large-file storage.

Why MSY? I primarily used MSY prices for this post because they are a reliable local store with a price list that is easy to read. For everything in this post I can get better prices by using eBay, the StaticIce.com.au price comparison site [2], and the computing section of the OzBargain site that gamifies finding good prices [3]. But a good shopping strategy nowadays is to compare prices in one store to determine which items are in your price range, and then shop around for the item you want. Checking all the different bargain sites for all these items would take much more time than I want to spend writing a blog post!

Conclusion Hard drives don't make sense for the vast majority of systems: not for laptops, not for typical desktop PCs, and not for small business servers (say 8TB or less of RAID storage). Hard drives only make sense for dozens or hundreds of TB of storage, and even then working out how to deal with SMR issues is going to increase the pain of deployment. Maybe using a combination of SSDs and hard drives to deal with the SMR issues will become a competitive advantage for NAS vendors in future. NVMe looks like it's on the way to being cheaper than SATA SSD, and there is likely to be a good market for systems with NVMe as the only internal storage option. The long-term trend of systems without DVD drives, and with maybe 2.5" SATA devices but no 3.5" SATA devices, seems to lead to the GPU being the major part that has to fit into a PC case and so determines the overall size. Maybe there will be a new trend of GPUs connected to riser cards so they can sit parallel to the motherboard in compact PCs. For business desktop systems (i.e. low-powered graphics hardware, as they aren't for gaming) I expect the trend will be towards NUC-type devices, which are already designed around M.2 as the storage form factor.

6 April 2023

Reproducible Builds: Reproducible Builds in March 2023

Welcome to the March 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, the motivation behind the reproducible builds effort is to ensure no malicious flaws have been introduced during the compilation and distribution processes. It does this by ensuring identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. If you are interested in contributing to the project, please do visit our Contribute page on our website.

News There was progress towards making the Go programming language reproducible this month, with the overall goal being to make the Go binaries distributed by Google and by Arch Linux (and others) bit-for-bit identical. These changes could become part of the upcoming version 1.21 release of Go. An issue in the Go issue tracker (#57120) is being used to follow and record progress on this.
Arnout Engelen updated our website to add and update reproducibility-related links for NixOS to reproducible.nixos.org. [ ]. In addition, Chris Lamb made some cosmetic changes to our presentations and resources page. [ ][ ]
Intel published a guide on how to reproducibly build their Trust Domain Extensions (TDX) firmware. TDX here refers to an Intel technology that combines their existing virtual machine and memory encryption technology with a new kind of virtual machine guest called a Trust Domain. This runs the CPU in a mode that protects the confidentiality of its memory contents and its state from any other software.
A reproducibility-related bug from early 2020 in the GNU GCC compiler has been fixed. The issue was that when GCC was invoked via the as frontend, the -ffile-prefix-map option was being ignored. We were tracking this in Debian via the build_path_captured_in_assembly_objects issue. It has now been fixed and will be reflected in GCC version 13.
Holger Levsen will present at foss-north 2023 in April of this year in Gothenburg, Sweden, on the topic of 'Reproducible Builds, the first ten years'.
Anthony Andreoli, Anis Lounis, Mourad Debbabi and Aiman Hanna of the Security Research Centre at Concordia University, Montreal published a paper this month entitled On the prevalence of software supply chain attacks: Empirical study and investigative framework:
Software Supply Chain Attacks (SSCAs) typically compromise hosts through trusted but infected software. The intent of this paper is twofold: First, we present an empirical study of the most prominent software supply chain attacks and their characteristics. Second, we propose an investigative framework for identifying, expressing, and evaluating characteristic behaviours of newfound attacks for mitigation and future defense purposes. We hypothesize that these behaviours are statistically malicious, existed in the past, and thus could have been thwarted in modernity through their cementation x-years ago. [ ]

On our mailing list this month:
  • Mattia Rizzolo is asking everyone in the community to save the date for the 2023 Reproducible Builds summit, which will take place between October 31st and November 2nd at Dock Europe in Hamburg, Germany. Separate announcement(s) to follow. [ ]
  • ahojlm posted a message announcing a new project, the first offering bootstrappable and verifiable builds without any binary seeds. That is to say, it provides a verifiable path towards a trusted software development platform without relying on pre-provided binary code, in order to guard against various forms of compiler backdoors. The project's homepage is hosted on Tor (mirror).

The minutes and logs from our March 2023 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 25th April at 15:00 UTC on #reproducible-builds on the OFTC network.
As a Valentine's Day present, Holger Levsen wrote on his blog on 14th February to express his thanks to OSUOSL for their continuous support of reproducible-builds.org. [ ]

Debian Vagrant Cascadian developed an easier setup for testing Debian packages, which uses sbuild's unshare mode along with reprotest, our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. [ ]
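As a rough illustration of the approach (the exact invocations below are my assumption, not details taken from Vagrant's setup), the two pieces fit together along these lines:
# hedged sketch: build a package once in an unshare-mode sbuild chroot
# (the .dsc name is hypothetical)...
sbuild --chroot-mode=unshare --dist=unstable foo_1.0-1.dsc
# ...then, from the unpacked source tree, let reprotest build the source
# twice with varied environments and report any differences in the artifacts
reprotest auto .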
Over 30 reviews of Debian packages were added, 14 were updated and 7 were removed this month, all adding to our knowledge about identified issues. A number of issues were updated as well, including Holger Levsen updating build_path_captured_in_assembly_objects to note that it has been fixed for GCC 13 [ ] and Vagrant Cascadian adding new issues to mark packages where the build path is captured via the Rust toolchain [ ], as well as a new categorisation for virtual packages that have nondeterministic versioned dependencies [ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate, and this month we wrote a large number of them. In addition, Vagrant Cascadian filed a bug with a patch to ensure GNU Modula-2 supports the SOURCE_DATE_EPOCH environment variable.

Testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In March, the following changes were made by Holger Levsen:
  • Arch Linux-related changes:
    • Build Arch packages in /tmp/archlinux-ci/$SRCPACKAGE instead of /tmp/$SRCPACKAGE. [ ]
    • Start 2/3 of the builds on the o1 node, the rest on o2. [ ]
    • Add graphs for Arch Linux (and OpenWrt) builds. [ ]
    • Toggle Arch-related builders to debug why a specific node overloaded. [ ][ ][ ][ ]
  • Node health checks:
    • Detect SetuptoolsDeprecationWarning tracebacks in Python builds. [ ]
    • Detect failures to perform chdist calls. [ ][ ]
  • OSUOSL node migration:
    • Install megacli packages that are needed for hardware RAID. [ ][ ]
    • Add health check and maintenance jobs for new nodes. [ ]
    • Add mail config for new nodes. [ ][ ]
    • Handle a node running in the future correctly. [ ][ ]
    • Migrate some nodes to Debian bookworm. [ ]
    • Fix nodes health overview for osuosl3. [ ]
    • Make sure the /srv/workspace directory is owned by the jenkins user. [ ]
    • Use .debian.net names everywhere, except when communicating with the outside world. [ ]
    • Grant fpierret access to a new node. [ ]
    • Update documentation. [ ]
    • Misc migration changes. [ ][ ][ ][ ][ ][ ][ ][ ]
  • Misc changes:
    • Enable fail2ban everywhere and monitor it with munin [ ].
    • Gracefully deal with non-existing Alpine schroots. [ ]
In addition, Roland Clobus is continuing his work on reproducible Debian ISO images:
  • Add/update openQA configuration [ ], and use the actual timestamp for openQA builds [ ].
  • Move adding the user to the docker group from the janitor_setup_worker script to the (more general) update_jdn.sh script. [ ]
  • Use the (short-term) reproducible source when generating live-build images. [ ]

diffoscope development diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats as well. This month, Mattia Rizzolo released version 238, and Chris Lamb released versions 239 and 240. Chris Lamb also made the following changes:
  • Fix compatibility with PyPDF 3.x, and correctly restore test data. [ ]
  • Rework PDF annotation handling into a separate method. [ ]
In addition, Holger Levsen performed a long-overdue overhaul of the Lintian overrides in the Debian packaging [ ][ ][ ][ ], and Mattia Rizzolo updated the packaging to silence a warning related to include_package_data=True [ ], fixed the build under Debian bullseye [ ], fixed a tool name in the list of tools permitted to be absent during package build tests [ ], as well as documented sending out an email upon [ ]. In addition, Vagrant Cascadian updated the version of GNU Guix to 238 [ ] and 239 [ ]. Vagrant also updated reprotest to version 0.7.23. [ ]
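For anyone who has not used diffoscope, a typical invocation simply names two artifacts to compare (the file names below are hypothetical); the --html option writes the report to a file:
# compare two builds of the same package and write a human-readable HTML report
diffoscope --html report.html first/foo_1.0_amd64.deb second/foo_1.0_amd64.deb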

Other development work Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

3 April 2023

Russ Allbery: Review: The Nordic Theory of Everything

Review: The Nordic Theory of Everything, by Anu Partanen
Publisher: Harper
Copyright: 2016
Printing: June 2017
ISBN: 0-06-231656-7
Format: Kindle
Pages: 338
Anu Partanen is a Finnish journalist who immigrated to the United States. The Nordic Theory of Everything, subtitled In Search of a Better Life, is an attempt to explain the merits of Finnish approaches to government and society to a US audience. It was her first book. If you follow US policy discussion at all, you have probably been exposed to many of the ideas in this book. There was a time when the US left was obsessed with comparisons between the US and Nordic countries, and while that obsession has faded somewhat, Nordic social systems are still discussed with envy and treated as a potential model. Many of the topics of this book are therefore predictable: parental leave, vacation, health care, education, happiness, life expectancy, all areas in which Nordic countries are far superior to the United States by essentially every statistical measure available, and all of which have been much discussed. Partanen brings two twists to this standard analysis. The first is that this book is part memoir: she fell in love with a US writer and made the decision to move to the US rather than asking him to move to Finland. She therefore experienced the transition between social and government systems first-hand and writes memorably on the resulting surprise, trade-offs, anxiety, and bafflement. The second, which I've not seen previously in this policy debate, is a fascinating argument that Finland is a far more individualistic country than the United States precisely because of its policy differences.
Most people, including myself, assumed that part of what made the United States a great country, and such an exceptional one, was that you could live your life relatively unencumbered by the downside of a traditional, old-fashioned society: dependency on the people you happened to be stuck with. In America you had the liberty to express your individuality and choose your own community. This would allow you to interact with family, neighbors, and fellow citizens on the basis of who you were, rather than on what you were obligated to do or expected to be according to old-fashioned thinking. The longer I lived in America, therefore, and the more places I visited and the more people I met, and the more American I myself became, the more puzzled I grew. For it was exactly those key benefits of modernity, freedom, personal independence, and opportunity, that seemed, from my outsider's perspective, in a thousand small ways to be surprisingly missing from American life today. Amid the anxiety and stress of people's daily lives, those grand ideals were looking more theoretical than actual.
The core of this argument is that the structure of life in the United States essentially coerces dependency on other people: employers, spouses, parents, children, and extended family. Because there is no universally available social support system, those relationships become essential for any hope of a good life, and often for survival. If parents do not heavily manage their children's education, there is a substantial risk of long-lasting damage to the stability and happiness of their life. If children do not care for their elderly parents, they may receive no care at all. Choosing not to get married often means choosing precarity and exhaustion because navigating society without pooling resources with someone else is incredibly difficult.
It was as if America, land of the Hollywood romance, was in practice mired in a premodern time when marriage was, first and foremost, not an expression of love, but rather a logistical and financial pact to help families survive by joining resources.
Partanen contrasts this with what she calls the Nordic theory of love:
What Lars Trägårdh came to understand during his years in the United States was that the overarching ambition of Nordic societies during the course of the twentieth century, and into the twenty-first, has not been to socialize the economy at all, as is often mistakenly assumed. Rather the goal has been to free the individual from all forms of dependency within the family and in civil society: the poor from charity, wives from husbands, adult children from parents, and elderly parents from their children. The express purpose of this freedom is to allow all those human relationships to be unencumbered by ulterior motives and needs, and thus to be entirely free, completely authentic, and driven purely by love.
She sees this as the common theme through most of the policy differences discussed in this book. The Finnish approach is to provide neutral and universal logistical support for most of life's expected challenges: birth, child-rearing, education, health, unemployment, and aging. This relieves other social relations (family, employer, church) of the corrosive strain of dependency and obligation. It also ensures people's basic well-being isn't reliant on accidents of association.
If the United States is so worried about crushing entrepreneurship and innovation, a good place to start would be freeing start-ups and companies from the burdens of babysitting the nation's citizens.
I found this fascinating as a persuasive technique. Partanen embraces the US ideal of individualism and points out that, rather than being collectivist as the US right tends to assume, Finland is better at fostering individualism and independence because the government works to remove unnecessary premodern constraints on individual lives. The reason why so many Americans are anxious and frantic is not a personal failing or bad luck. It's because the US social system is deeply hostile to healthy relationships and individual independence. It demands a constant level of daily problem-solving and crisis management that is profoundly exhausting, nearly impossible to navigate alone, and damaging to the ideal of equal relationships.

Whether this line of argument will work is another question, and I'm dubious for reasons that Partanen (probably wisely) avoids. She presents the Finnish approach as a discovery that the US would benefit from, and the US approach as a well-intentioned mistake. I think this is superficially appealing; almost all corners of US political belief at least give lip service to individualism and independence. However, advocates of political change will eventually need to address the fact that many US conservatives see this type of social coercion as an intended feature of society rather than a flaw. This is most obvious when one looks at family relationships. Partanen treats the idea that marriage should be a free choice between equals rather than an economic necessity as self-evident, but there is a significant strain of US political thought that embraces punishing people for not staying within the bounds of a conservative ideal of family. One will often find, primarily but not exclusively among the more religious, a contention that the basic unit of society is the (heterosexual, patriarchal) family, not the individual, and that the suffering of anyone outside that structure is their own fault. Not wanting to get married, be the primary caregiver for one's parents, or abandon a career in order to raise children is treated as malignant selfishness and immorality rather than a personal choice that can be enabled by a modern social system. Here, I think Partanen is accurate to identify the Finnish social system as more modern. It embraces the philosophical concept of modernity, namely that social systems can be improved and social structures are not timeless. This is going to be a hard argument to swallow for those who see the pressure towards forming dependency ties within families as natural, and societal efforts to relieve those pressures as government meddling. In that intellectual framework, rather than an attempt to improve the quality of life, government logistical support is perceived as hostility to traditional family obligations and an attempt to replace "natural" human ties with "artificial" dependence on government services. Partanen doesn't attempt to have that debate.

Two other things struck me in this book. The first is that, in Partanen's presentation, Finns expect high-quality services from their government and work to improve them when they fall short. This sounds like an obvious statement, but I don't think it is in the context of US politics, and neither does Partanen. She devotes a chapter to the topic, subtitled "Go ahead: ask what your country can do for you." This is, to me, one of the most frustrating aspects of US political debate.
Our attitude towards government is almost entirely hostile and negative, even among the political corners that would like to see government do more. Failures of government programs are treated as malice, malfeasance, or inherent incompetence: in short, signs the program should never have been attempted, rather than opportunities to learn and improve. Finland had mediocre public schools, decided to make them better, and succeeded. The moment US public schools start deteriorating, we throw much of our effort into encouraging private competition and dismantling the public school system. Partanen doesn't draw this connection, but I see a link between the US desire for market solutions to societal problems and the level of exhaustion and anxiety that is so common in US life. Solving problems by throwing them open to competition is a way of giving up, of saying we have no idea how to improve something and are hoping someone else will figure it out for a profit. Analyzing the failures of an existing system and designing incremental improvements is hard and slow work. Throwing out the system and hoping some corporation will come up with something better is disruptive but easy. When everyone is already overwhelmed by life and devoid of energy to work on complex social problems, it's tempting to give up on compromise and coalition-building and let everyone go their separate ways on their own dime. We cede the essential work of designing a good society to start-ups. This creates a vicious cycle: the resulting market solutions are inevitably gated by wealth and thus precarious and artificially scarce, which in turn creates more anxiety and stress. The short-term energy saved by not having to wrestle with a hard problem is overwhelmed by the long-term cost of having to navigate a complex and adversarial economic relationship.

That leads into the last point: schools. There's a lot of discussion here about school quality and design, which I won't review in detail but which is worth reading. What struck me about Partanen's discussion, though, is how easy the Finnish system is to use. Finnish parents just send their kids to the most convenient school and rarely give that a second thought. The critical property is that all the schools are basically fine, and therefore there is no need to place one's child in an exceptional school to ensure they have a good life. It's axiomatic in the US that more choice is better. This is a constant refrain in our political discussion around schools: parental choice, parental control, options, decisions, permission, matching children to schools tailored for their needs. Those choices are almost entirely absent in Finland, at least in Partanen's description, and the amount of mental and emotional energy this saves is astonishing. Parents simply don't think about this, and everything is fine. I think we dramatically underestimate the negative effects of constantly having to make difficult decisions with significant consequences, and drastically overstate the benefits of having every aspect of life be full of major decision points. To let go of that attempt at control, however illusory, people have to believe in a baseline of quality that makes the choice less fraught. That's precisely what Finland provides by expecting high-quality social services and working to fix them when they fall short, an effort that the United States has by and large abandoned.
A lot of non-fiction books could be turned into long articles without losing much substance, and I think The Nordic Theory of Everything falls partly into that trap. Partanen repeats the same ideas from several different angles, and the book felt a bit padded towards the end. If you're already familiar with the policy comparisons between the US and Nordic countries, you will have seen a lot of this before, and the book bogs down when Partanen strays too far from memoir and personal reactions. But the focus on individualism and eliminating dependency is new, at least to me, and is such an illuminating way to look at the contrast that I think the book is worth reading just for that.

Rating: 7 out of 10

Matt Brown: Retrospective: Mar 2023

The key decision I made mid-March was to commit to pursuing ventilation monitoring as my primary product development focus. Prior to that decision, I hoped to use my writing plan to drive a breadth-first survey of the opportunities for each of my product ideas before deciding which had the best business potential to focus on first. Two factors changed my mind:
  1. As noted last month, I'm finding the writing process much slower and harder than I expected; the survey across all the ideas may not complete until mid-year or later!
  2. I've realised that, having begun building co2mon.nz last year, stopping work on the project at this point would leave me feeling that I had not done justice to developing the product and testing the market - seeing it through to a conclusion is important to me.
This decision is an explicit choice to prioritize seeing a project through to a conclusion (successful, or otherwise) regardless of whether or not it has the highest potential of the various ideas I could invest time into. I'm comfortable making that trade-off in this instance, but I am going to bound my time investment to two months. I'll evaluate at the end of May whether I'm seeing sufficient traction and potential to justify continuing further with the idea. I had only one fully uninterrupted work week in March due to a combination of days out for school trips, two LandSAR call-outs and various farm maintenance tasks. April will be similarly disrupted given school holidays and a planned family trip to Brisbane. Sharpening my focus feels particularly necessary given this reality, to ensure I'm not spread overly thin.

Goal Scoring See last month's retrospective for a refresher on my scoring methodology.

Consulting - 4/10 Goal: Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities. Consulting hours were down from February, hitting only 31% of target this month as the client didn't make use of all the hours I had allocated for them. I didn't invest any time in advertising my services or developing new clients or projects over the month, which will now become a priority for April.

Product Development - 4/10 Goal: Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential. With the new focus entirely on co2mon.nz, I spent a lot of time re-working and developing my thinking around how I want to take this forward, specifically trying to analyse where I saw an opportunity in the market. After attending a workshop on finding product-market fit using quantifiable metrics at the Southern SaaS conference this month, I've realised that much of the time I spent on this analysis was too insular and focused on my own observations - I need to get out and talk to a lot more people and get more feedback on their needs and understanding of the space instead. Seems obvious in retrospect! I also spent a few days beginning to build another batch of prototype CO2 monitors so I have some units to use for experimentation and testing with potential customers as I get out and have those conversations. I can probably build one or two more batches of prototype monitors before needing to look at PCB assembly in earnest.

Professional Network Development - 8/10 Goal: To build a professional relationship with at least 30 new people this year. This goal continues to be my highlight, with 8 new contacts added this month and catch-ups with 4 existing people I had not spoken to for a while. I joined the KiwiSaaS central community and attended the Southern SaaS conference this month as well, which has been time well spent given the workshop learnings discussed above.

Writing - 3/10 Goal: To publish a high-quality piece of writing on this site at least once a week. I published a single post, the first half of my updated ventilation monitoring business plan. I continue to find the writing process much harder and slower than I hoped or expected and remain well below my target publishing rate, but one post is better than zero! I tested working with an editor I contracted via UpWork who provided some very useful feedback on the structure of my writing which helped to unblock some of my progress. I plan to continue doing this for at least a few more posts.

Community - 5/10 Goal: To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

Feedback As always, I'd love to hear from you if you have thoughts or feedback triggered by anything I've written above.

30 March 2023

Jonathan McDowell: Buttering up my storage

(TL;DR: I've been trying out btrfs in some places instead of ext4, I've hit absolutely zero issues, and there are a few features that make me plan to use it more.)

Despite (or perhaps because of) working on storage products for a reasonable chunk of my career I have tended towards a conservative approach to my filesystems. By the time I came to Linux ext2 was well established, the move to ext3 was a logical one (the joys of added journalling for faster recovery after unclean shutdowns), and for a long time my default stack has been MD RAID with LVM2 on top and then ext4 as the filesystem. I've dabbled with other filesystems; I ran XFS for a while on my VDR machine, and also when I had a large tradspool with INN, but never really had a hard requirement for it. I've ended up adminning a machine that had JFS in the past, largely for historical reasons, but don't really remember any issues (vague recollections of NFS problems, but that might just have been NFS being NFS).

However. ZFS has gathered itself a significant fan base, and that makes me wonder what it can offer and whether I want it. Firstly, let's be clear that I'm never going to run a primary filesystem that isn't part of the mainline kernel. So ZFS itself is out, because I run Linux. So what do I want that I can't get with ext4? Firstly, I'd like data checksumming. As storage gets larger there's a bigger chance of silent data corruption, and while I have backups of the important stuff that doesn't help if you don't know you need to use them. Secondly, these days I have machines running containers, VMs, or with lots of source checkouts with a reasonable amount of overlap in their data. Disk space has got cheaper, but I'd still like to be able to do some sort of deduplication of common blocks.

So, I've been trying out btrfs. When I installed my desktop I went with btrfs for / and /home (I kept /boot as ext4). The thought process was that this was a local machine (so easy access if it all went wrong) and I take regular backups (so if it all went wrong I could recover). That was a year and a half ago and it's been pretty dull; I mostly forget I'm running btrfs instead of ext4. This is on a machine that tracks Debian testing, so currently on kernel 6.1 but originally installed with 5.10. So it seems modern btrfs is reasonably stable for a machine that isn't driven especially hard. Good start.

The fact that I forget what filesystem I'm running points to the fact that I'm not actually doing anything special here. I get the advantage of data checksumming, but not much else. Two things spring to mind. Firstly, I don't do snapshots. Given I run testing it might be wiser if I did take a snapshot before every apt-get upgrade, and I have a friend who does just that, but even when I've run unstable I've never had a machine get itself into a state that I couldn't recover, so I haven't spent time investigating. I note Ubuntu has apt-btrfs-snapshot, but it doesn't seem to have had any updates for years. The other thing I didn't do when I installed my desktop is take advantage of subvolumes. I'm still trying to get my head around exactly what I want them for, but they provide a partial replacement for LVM when it comes to carving up disk space. Instead of the separate / and /home LVs I created, I could have created a single LV holding a single btrfs filesystem, with / and /home as separate subvolumes, allowing me to snapshot each individually. Quotas can also be applied separately, so there's still the potential to prevent one subvolume taking all available space.

Encouraged by the lack of hassle with my desktop I decided to try moving my sbuild machine over to use btrfs for its build chroots. For Reasons this is a VM kindly hosted by a friend, rather than something local. To be honest these days I would probably go for local hosting, but it works and there's no strong reason to move. The point is it's remote, and so if migrating went wrong and I had to ask for assistance I'd be bothering someone who's doing me a favour as it is. The build VM is, of course, running LVM, and there was luckily some free space available. I'm reasonably sure the underlying storage involves spinning rust, so I did a laborious set of pvmove commands to make sure all the available space was at the start of the PV, and created a new btrfs volume there. I was advised that while btrfs-convert would do the job it was better to create a fresh filesystem where possible. This time I did create an initial root subvolume.

Configuring up sbuild was then much simpler than I'd expected. My setup originally started out as a set of tarballs for the chroots that would get untarred and used for the builds, which is pretty slow. Once overlayfs was mature enough I switched to that. I'd had a conversation with Enrico about his nspawn/btrfs setup, but it turned out Russ Allbery had written an excellent set of instructions on sbuild with btrfs. I tweaked my existing setup based on his details, and I was in business. Each chroot is a separate subvolume - I don't actually end up having to mount them individually, but it means that only the chroot in use gets snapshotted. For example, during a build the following can be observed:
# btrfs subvolume list /
ID 257 gen 111534 top level 5 path root
ID 271 gen 111525 top level 257 path srv/chroot/unstable-amd64-sbuild
ID 275 gen 27873 top level 257 path srv/chroot/bullseye-amd64-sbuild
ID 276 gen 27873 top level 257 path srv/chroot/buster-amd64-sbuild
ID 343 gen 111533 top level 257 path srv/chroot/snapshots/unstable-amd64-sbuild-328059a0-e74b-4d9f-be70-24b59ccba121
I was a little confused about whether I'd got something wrong because the snapshot's top level is listed as 257 rather than 271, but digging further with btrfs subvolume show on the 2 mounted directories correctly showed the snapshot had a parent equal to the chroot, not /.

As a final step I ran jdupes via jdupes -1Br / to deduplicate things across the filesystem. It didn't end up providing a significant saving unfortunately - I guess there's a reasonable amount of change between Debian releases - but I then tried it on my desktop, which tends to have a large number of similar source trees checked out. There I managed to save about 5% on /home, which didn't seem too shabby.

The sbuild setup has been in place for a couple of months now, and I've run quite a few builds on it while preparing for the freeze. So I'm fairly confident in the stability of the setup, and my next move is to transition my local house server over to btrfs for its containers (which all run under systemd-nspawn). Those generally run a Debian stable base so there should be a decent amount of commonality for deduping. I'm not saying I'm yet at the point where I'll default to btrfs on new installs, but I'm definitely looking at it for situations where I think I can get benefits from deduplication, or from being able to divide up disk space without hard partitioning. (And, just to answer the worry I had when I started, I've got nowhere near ENOSPC problems, but I believe they're handled much more gracefully these days. And my experience of ZFS when it got above 90% utilization was far from ideal too.)
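As a footnote, to make the subvolume and snapshot workflow described above concrete, here is a minimal sketch (the device, mount point and chroot names are illustrative, not my exact setup):
# create the filesystem on an LVM LV, plus an initial root subvolume
mkfs.btrfs /dev/vg0/btrfs
mount /dev/vg0/btrfs /mnt
btrfs subvolume create /mnt/root
# create a per-chroot subvolume underneath it
mkdir -p /mnt/root/srv/chroot
btrfs subvolume create /mnt/root/srv/chroot/unstable-amd64-sbuild
# later, with the root subvolume mounted as /, snapshot a chroot for a build
btrfs subvolume snapshot /srv/chroot/unstable-amd64-sbuild /srv/chroot/snapshots/unstable-amd64-sbuild-build1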

28 March 2023

Shirish Agarwal: Three Ancient Civilizations, Tesla, IMU and other things.

Three Ancient Civilizations I have been reading books for a long time, but I don't know how or when I realized that there are three ancient civilizations that time and again seem to fascinate authors, whether American or European. The three civilizations that get mentioned again and again are the Egyptian, Greek and Roman, and I have no clue as to why. Two other civilizations closely follow them: the Mesopotamian and Sumerian. Why most authors, irrespective of genre, are mystified by these five civilizations is beyond me, but they certainly conjure up a lot of imagination. Now if I were on the right side of 40, maybe 22-25 onwards, and had the means or the opportunity or both, I would have gone and tried to learn as much as I could about these various civilizations. There is still a lot of enigma attached to them, and the official explanations of how these various civilizations ended seem too good to be true. One of the more interesting points has been how Greek mythology was subverted and flipped to make Roman mythology. Apparently, many Greek gods have a Roman aspect whose qualities are the opposite of the Greek one. I haven't yet learned why that is so. Apart from the Iziko Natural Museum in South Africa, which did have its share of wonders (the whole South African experience was surreal), nowhere have I witnessed artefacts from the above other than in Africa. I could go on, but I don't know what is real and what is mythology when it comes to these various cultures, including whatever little I experienced in Africa.

Recent History Having said the above, I also find that many Indians either do not know or are not interested in understanding recent history. I am talking of the time between the 1800s and the 1950s. British apologist Mr. William Dalrymple, in his book The Anarchy, has shared how the East India Company looted India and gave less than half back to the British. So where did the rest of the money go? It basically went to the tax-free havens around the UK. That tale is very similar to how Axis gold was used to build an entity called Switzerland from scratch. There have been books as well as a couple of miniseries that document that fact. But for me this is not the real story; this is more of a side dish per se. The real story perhaps is how the EU peace project was born. Because of Hitler's rise and the way World War 2 happened, the Allies knew that they couldn't subjugate Germany through another humiliation as they had after World War 1; who knows, another Hitler might have come. So while they did fine the Germans, they also helped them via the Marshall Plan. Germany also realized that it had been warring with other European countries, as well as quite a few others, for over a thousand years. The first treaty in that regard was the Western Union alliance. While on the face of it this was a defence treaty between sovereign powers, interestingly the UK at that point in time wanted more countries to be part of the Treaty of Dunkirk. This was quickly followed two years later by the Treaty of London, which made way for the Council of Europe to be formed. The reason I am sharing this is that a lot of Indians whom I meet on social media do not know that the UK was instrumental, front and centre, in the formation of the European Union (EU). They perhaps also don't realize that after World War 2 the UK was greatly diminished both financially and militarily. That is the reason it gave back its erstwhile colonies, including India and other countries. If the UK hadn't become part of the EU it would have been called the poor man of Europe, as it had been a few hundred years ago. In fact, after Brexit, the UK has fallen on dire times: supermarkets running out of veggies and whatnot, electricity sky-high because the government doesn't want to curtail the profits of the gas companies, which in turn donate money to Tory coffers. Sadly, most of my brethren do not know this, hence I have had to share it. The EU became a power in its own right due to the gravity model of trade. The U.S. is attempting the same thing and calls it near-shoring.

Tesla, Investor Day Presentation, Toyota and Free Software The above brings us nicely to the Tesla Investor Day that happened a couple of weeks back. The biggest news, though, was broken just 24 hours earlier, when the governor of Nuevo León (the Monterrey region) shared that Giga Mexico would be happening, along with some site photographs. There have been bits of news on that off and on since then. Toyota, meanwhile, has been putting up lots of anti-EV posters and whatnot. In fact, India seems to be mirroring Japan in a lot of ways; the only difference is that Japan is still economically superior to India. I do sincerely hope that Japan can at least get out of its lost decades. I have asked privately if any of the Japanese translators would be willing to translate the anti-EV poster so we may all know what it says.
Anti-EV poster by Toyota

Urinal outside UK Embassy About a week back, a few people chanting Khalistan slogans entered the Indian Embassy grounds and displayed flags. This was in the UK. Now, in a retaliatory move against a friendly nation, we want to build a urinal outside the UK Embassy, after making a security downgrade for it. The pettiness being shown by the Indian Govt. is striking, especially when it has so few friends. There are also plans to do the same for the U.S. and other embassies as well. I do not know when we will get common sense.

Holi Just a few weeks back, Holi happened. It used to be a sweet and innocent festival, but over the last few years I have been hearing about and seeing sexual harassment on the rise. In fact, I saw quite a few lewd posts directed at women on or around Holi, and also videos of the same. One of the most shameful incidents involved a Japanese tourist. She was not only sexually molested but also terrorized with the threat that if she were to report it she could be raped and murdered. She promptly left Delhi. Once it was reported in the mass media, Delhi Police tried to show it was doing something. FWIW, Delhi's crime-against-women stats are at a record high and convictions at a record low.

Ecommerce Rules being changed, logistics gonna be tough. Just today there have been changes to the e-commerce rules (again), and this is gonna be a pain for almost all companies, big and small, with the exception of Adani and Ambani. Almost all players, including the Tatas, have called them out. Of course, all such laws have been passed without debate. In such a scenario, small startups like these cannot hope to grow their business.

Inertial Measurement Unit or Full Body Tracking with cheap hardware. Apparently, a whole host of companies are looking at 3-D tracking using cheap hardware known as IMU sensors. Sooner rather than later, cheap 3-D glasses and IMU sensors should make the 3-D market explode worldwide. There is a huge potential upside to this, and it will probably overtake smartphones as well. But then the danger will be of our thoughts, ideas, nightmares etc. being shared without our consent. The more we tap into a virtual world, what stops anybody from tapping into our brains and practically stealing our identities in more than one way? I dunno if there are any Debian people or FSF projects working on the above. Even laws can only do so much; until and unless there are alternative places and ways, it will be difficult, to say the least.

Jonathan Dowland: daily log

I was asked on the Fediverse1 to describe my $dayjob daily workflow. Blogging about it seemed like a good opportunity to take stock of my current setup. The core of my approach is to maintain a daily log, rather like an engineering Lab Book2. My number one tip for anyone starting a career in Software Engineering (or for that matter systems administration) is to keep a daily log of what you intended to do, what you actually did, what you changed, what changed as a consequence, etcetera. I've already written about the tools I use for that: vimwiki. The key drawback of that system is managing TODO lists. In Vimwiki, tasks look like this:
  * [ ] buy milk (still todo)
  * [X] clean the car (done)
These get dispersed or duplicated around different days/wiki pages. It's hard to get a current view of open (or otherwise relevant) tasks, and impractical to update many copies of the same task when its state changes (such as when it's done). The solution I've adopted for now is another Vim plugin, taskwiki, which synchronises tasks with Taskwarrior3, an external task-management tool. If I mark a task as "done", Taskwiki updates all references to that task to reflect the new state. I can also construct queries to list all tasks matching some criteria. I have a special Vimwiki page named "Backlog" which runs the query "all tasks tagged 'redhat' in state 'pending'" (linked from the boilerplate at the top of every page I write, for quick access):
= Backlog   +redhat status:pending =
* [ ] buy milk (still todo)
...
Much like the base Vimwiki plugin, Taskwiki is very opinionated, and I've had to tame it by disabling several of its features. I've also hit a couple of mildly frustrating bugs (#368, #425). I might one day have a go at writing an alternative, simpler plugin in Lua (Neovim's native scripting language), but for now it works well enough and I don't have the time. There's very little in this current workflow for scheduling tasks, and that's probably where the focus should be for my next iterative improvement efforts. I think Taskwarrior, the underlying tool, has some good support for that. I'd particularly like some more visual approaches for managing the backlog, such as something Kanban-style.
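For reference, the Backlog query above can also be run against Taskwarrior directly from a shell; a minimal example (the tag and task ID here are illustrative):
# list pending tasks tagged 'redhat', mirroring the Backlog wiki page
task +redhat status:pending list
# mark a task done by ID; taskwiki reflects the new state when the wiki page is next loaded
task 42 done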

  1. sorry I can't find who/the toot any more
  2. Analogous to an engineering lab book, but the analogy is not exact. I recently read a little bit about what Engineers are actually taught to do in a lab book, and it's quite a bit more narrowly-scoped than I realised. This slide-deck is interesting: https://www.training.nih.gov/assets/Lab_Notebook_508_(new).pdf
  3. I rarely use Taskwarrior directly yet, and so I haven't written about it independently of Taskwiki.

Matt Brown: Ventilation Monitoring: Ensuring every space has clean, fresh air

The importance of clean, fresh indoor air is one of the most tangible takeaways of the Covid-19 pandemic. In addition to being an effective risk mitigation strategy for reducing the spread of respiratory illnesses, clean, fresh air is necessary to enable effective cognitive performance. Monitoring indoor air quality is relatively easy to do, but traditionally has not been a key focus. I believe air quality monitoring should be accessible for any indoor space, and for highly occupied indoor spaces should be provided on a continuous basis. This post explores the need and an opportunity for a business that can accelerate the adoption of ventilation monitoring through the following topics:

The importance of indoor air quality Clean, fresh air is fundamental to life and health. That might sound obvious, but unfortunately being obvious is not enough to ensure the air we breathe is in fact always clean and healthy. Repeated studies have revealed that in many cases the air you're breathing at school, on the bus, at work, and probably also at home falls well below the ideal of what clean, fresh air should be. Unclean air has potential long-term health impacts and has also been shown to lower cognitive performance, impacting the ability to learn and work, as well as increasing the risk of transmission of respiratory illnesses like Covid-19 and the flu. Ventilation (replacing old, stale air with clean, fresh air) is the most effective and economical method of improving and maintaining high indoor air quality. Most New Zealand buildings (including schools and houses) are designed to rely on manual ventilation (opening windows), while newer buildings, often including larger or commercial buildings, may use mechanical ventilation involving fans and ducts. Mechanical ventilation including filtration may also be required in situations where the outdoor air is not clean and fresh, such as in a city or next to a busy intersection.

Observing the invisible Overall air quality is a complex topic involving many contributing factors, many of which are invisible and not perceptible to us until well after adverse effects or irritation occur. This complexity and lack of visible signal is a large contributing factor to the ignorance and lack of attention towards indoor air quality that is prevalent in most buildings and indoor spaces today. Our attention is biased towards the risks that we can see, and this default bias has not been helped by hesitation and resistance to the idea that aerosol transmission and air quality is an important factor in preventing disease transmission, which has only recently started to change. Zeynep Tufekci has a great overview that provides fascinating context for how an overreaction to the early incorrect theories of bad air and miasma causing disease contributed to aerosol transmission and air quality being incorrectly neglected for so long. Correcting this history of inattention to indoor air quality is going to take time and effort, but one significant step that we can take to help start the journey to ensuring all indoor spaces have clean, healthy air is to make the invisible part visible. The concentration of carbon dioxide (CO2) in a space is an incredibly effective and easy-to-measure proxy for the ventilation of a space. The atmospheric background level of CO2 is around 420 parts per million (ppm), while our exhaled breath has concentrations as high as 40,000 ppm. Without effective ventilation, one or more people breathing in an enclosed space will rapidly lead to an observable increase in CO2 concentration, which in turn provides a signal that the ventilation is insufficient and needs to be improved. Monitoring CO2 and improving ventilation is not a panacea for all possible air quality issues, but for the majority of buildings and indoor spaces, using CO2 as a proxy for ventilation and increasing ventilation when CO2 levels rise above recommended levels is a simple, effective and achievable approach that will deliver improvements in cognitive performance and reduction in the risk of disease transmission with few, if any, downsides or risks. See this Public Health Communication Centre briefing for a more detailed explanation.
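As a rough worked example using the figures above (my own illustration, not taken from the briefing): in a room reading 1,500 ppm, the excess CO2 is 1,500 - 420 = 1,080 ppm, and dividing that by the roughly 40,000 ppm concentration of exhaled breath suggests that about 2.7% of every breath being inhaled was recently exhaled by someone else, a clear signal that the space needs more fresh air.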

Adding clean air to our hygiene practices We have well-established expectations of hygiene for the food we eat and the water we drink, and these expectations are codified in regulations that ensure those providing these services do so in a way that gives us confidence that we're not going to be at risk of illness. You may recall seeing food grade ratings prominently displayed on the walls of restaurants and cafes that you visit as an example of this. Why should the air we breathe be treated any differently? I think there is a strong argument that indoor air quality deserves regulation, both of the absolute quality of the air and ensuring that the practices and achieved air quality are clearly advertised and available. Ventilation monitoring via measurement of CO2 concentration provides an effective and achievable method that can be used to achieve this, and countries like Belgium and Japan are already starting to regulate indoor air quality. In the UK, the independent SAGE group of scientists has published 'Scores on the Doors', a proposal which demonstrates how CO2 monitoring can be helpful in providing information about the air quality of indoor spaces. Unfortunately there is no movement in any of these directions in New Zealand yet, and no sign that regulation or even a basic campaign to raise awareness of ventilation and air quality is even being planned. This is disappointing, but even if such work was planned, it would still require appropriate ventilation monitoring products and services to enable it, and while there are some options available, it is not a fully solved space yet.

Existing ventilation monitoring options Until recently the available offerings for ventilation monitoring have sat at two distinct ends of the price and quality spectrum:
  • Handheld air quality meters advertised as measuring CO2, but in reality reporting only an approximation. These meters do not contain actual CO2 sensors, and only approximate CO2 levels based on measurements of other components of the air. While cheap (often less than $100), these meters are not useful for providing reliable data that can be systematically used to assess and improve ventilation and should be avoided.
  • High-end building management systems (BMS), and industrial measurement products targeted at large buildings such as offices or commercial applications such as food production. These systems require specialist installation, often integrated with large whole-building air conditioning systems. These systems, if appropriately configured, can be a great solution for the types of buildings and spaces that can afford them, but by their nature and cost, they do not offer a solution for the majority of smaller buildings and indoor spaces where we tend to spend a lot of our time.
Over the last few years a growing number of companies have developed products that fit in between the unreliable air quality meters and the expensive BMS/industrial measurement products. Promising NZ-based options in this space include Air Suite, Tether and Monkeytronics. These products are wall mountable, resemble a smoke alarm and utilise a WiFi network to report their measurements to a supporting web service. Pricing varies between $200 and $300 ex GST per unit. Aranet, while not NZ-based, provides a handheld monitor, the Aranet4 Home, which is well regarded for quality and accuracy. Aranet4 Home devices are the most expensive in this space, retailing at $386 ex GST, and offer a clunkier and less convenient set of connectivity options via a Bluetooth connection to an associated phone. To obtain similar reporting functionality to the other products requires upgrading to their Pro model and purchasing a separate base station, at a combined cost of $1255 ex GST. Outside the commercial product offerings are a number of open source DIY options, which can be built by anyone with basic electronics knowledge. AirGradient is a leading example based in Thailand, and within New Zealand Oliver Seiler's CO2 Monitor provides similar functionality. These open source options have a parts cost in the $100-$150 range, depending on volume built, and provide high-quality measurements via trusted CO2 sensors, while also offering huge flexibility in how they operate, interact with users and connect to potential supporting web services.

An opportunity: Small businesses and organisations While a growing number of high-quality CO2 monitors has the potential to help drive increased adoption of ventilation monitoring, the plethora of small businesses and organisations that own, operate and manage many of the indoor spaces we visit on a day-to-day basis do not appear to be well served by these existing products. To deploy ventilation monitoring a small business or organisation needs to first become aware of the need or demand for it, and then have a simple and easy path to acquire and install the monitor and access the data. Little to no marketing or demand generation appears to be targeted towards this market by the existing businesses, and tellingly several of the products are not directly available for sale, requiring interaction with a salesperson to purchase. This indicates a focus on selling to larger customers who have a campus or portfolio of buildings and will purchase in larger quantities than the typical small business or organisation will. Small businesses and organisations are likely to occupy smaller buildings and spaces where manual ventilation is the prevalent method of improving and maintaining air quality. Maintaining clean, fresh air via manual ventilation requires the occupants of the space to receive an obvious and straightforward signal when action (opening windows, etc.) is required. While the products above all tend to provide some form of local feedback and display in the room, the indication provided and the notification of when to take action are less obvious and prominent than would be ideal in a situation where manual ventilation is being relied upon. Informally testing this opportunity with family and friends running small businesses over the last few months has resulted in promising feedback. One particular success story was the discovery of a fresh air duct on the air conditioning unit in a small office that had never been connected to the outside air and was simply recirculating air from the ceiling space back into the office! The resulting stuffiness and poor air quality had been noticed, but without the clear indication from the CO2 monitor that the air conditioning was actually making things worse, rather than better, the underlying issue had not been understood. With the issue fixed and the duct connected, that business now enjoys much more productive and healthy working conditions.

Next steps Many small businesses and organisations are likely to have poor air quality and opportunities for improvement similar to the example above, waiting to be found and fixed, and the existing products available are neither focused on nor ideal for the needs of this market. I have spent some time over the past six months building a basic CO2 monitoring service that I have used to deploy ventilation monitoring to our local school and a few other local businesses. There are a number of challenges that still need to be addressed in order to scale the business up, but I think there is a reasonable chance that I can build a viable business that offers an attractive and useful solution and would accelerate the deployment of ventilation monitoring for small businesses and organisations. In an upcoming post, I will explain the foundations of the service that I have built to date, the challenges that need to be overcome and how I plan to evolve the service from the current prototype into a sustainable, bootstrapped business.

27 March 2023

Simon Josefsson: OpenPGP master key on Nitrokey Start

I've used hardware-backed OpenPGP keys since 2006, when I imported newly generated rsa1024 subkeys to a FSFE Fellowship card. This worked well for several years, and I recall buying more ZeitControl cards for multi-machine usage and backup purposes. As a side note, I recall being unsatisfied with the weak 1024-bit RSA subkeys at the time (my primary key was a somewhat stronger 1280-bit RSA key created back in 2002), but OpenPGP cards at the time didn't support more than 1024-bit RSA, and were (and still often are) also limited to power-of-two RSA key sizes, which I dislike. I had my master key on disk with a strong password for a while, mostly to refresh the expiration time of the subkeys and to sign others' OpenPGP keys. At some point I stopped carrying around encrypted copies of my master key. That was my main setup when I migrated to a new stronger RSA 3744-bit key with rsa2048 subkeys on a YubiKey NEO back in 2014. At that point, signing others' OpenPGP keys was a rare enough occurrence that I settled for bringing out my offline machine to perform this operation, transferring the public key to sign on USB sticks. In 2019 I re-evaluated my OpenPGP setup and ended up creating an offline Ed25519 key with subkeys on a FST-01G running Gnuk. My approach for signing others' OpenPGP keys was still to bring out my offline machine and sign things using the master secret, using USB sticks for storage and transport. Which meant I almost never did that, because it took too much effort. So my 2019-era Ed25519 key still only has a handful of signatures on it, since I had essentially stopped signing others' keys, which is the traditional way of getting signatures in return. None of this caused any critical problem for me because I continued to use my old 2014-era RSA3744 key in parallel with my new 2019-era Ed25519 key, since too many systems didn't handle Ed25519. However, during 2022 this changed, and the only remaining environment that I still used my RSA3744 key for was Debian, and they require OpenPGP signatures on the new key to allow it to replace an older key. I was in denial about this sub-optimal solution during 2022 and endured its practical consequences, having to use the YubiKey NEO (which I had replaced with a permanently inserted YubiKey Nano at some point) for Debian-related purposes alone. In December 2022 I bought a new laptop and set up a FST-01SZ with my Ed25519 key, and while I have taken a vacation from Debian, I continue to extend the expiration period on the old RSA3744 key in case I will ever have to use it again, so the overall OpenPGP setup was still sub-optimal. Having two valid OpenPGP keys at the same time causes people to use both for email encryption (leading me to have to use both devices), and the WKD Key Discovery protocol doesn't like two valid keys either. At FOSDEM 23 I ran into Andre Heinecke at GnuPG, and I couldn't help complaining about how complex and unsatisfying all OpenPGP-related matters were; he mildly ignored my rant and asked why I didn't put the master key on another smartcard. The comment sunk in when I came home, and recently I connected all the dots; this post is a summary of what I did to move my offline OpenPGP master key to a Nitrokey Start. First a word about device choice: I still prefer to use hardware devices that are as compatible with free software as possible, but the FST-01G and FST-01SZ are no longer easily available for purchase. I got a comment about the Nitrokey Start in my last post, and had two of them available to experiment with.
There are things to dislike about the Nitrokey Start compared to the YubiKey (e.g., the relatively insecure chip architecture, the bulkier form factor and the lack of FIDO/U2F/OATH support), but as far as I know there is no more widely available owner-controlled device that is manufactured for the intended purpose of implementing an OpenPGP card. Thus it hits the sweet spot for me.
Nitrokey Start
The first step is to run the latest firmware on the Nitrokey Start, for bug fixes and the important OpenSSH 9.0 compatibility, and there is reproducibly-built firmware published that you can install using pynitrokey. I run Trisquel 11 aramo on my laptop, which does not include the Python pip package (likely because it promotes installing non-free software), so that was a slight complication. Building the firmware locally may have worked, and I would like to do that eventually to confirm the published firmware; however, to save time I settled for installing the Ubuntu 22.04 packages on my machine:
$ sha256sum python3-pip*
ded6b3867a4a4cbaff0940cab366975d6aeecc76b9f2d2efa3deceb062668b1c  python3-pip_22.0.2+dfsg-1ubuntu0.2_all.deb
e1561575130c41dc3309023a345de337e84b4b04c21c74db57f599e267114325  python3-pip-whl_22.0.2+dfsg-1ubuntu0.2_all.deb
$ doas dpkg -i python3-pip*
...
$ doas apt install -f
...
$
Installing pynitrokey downloaded a bunch of dependencies, and it would be nice to audit the licenses and security vulnerabilities of each of them; see the sketch after the transcript for one way to start on that. (Verbose output below slightly redacted.)
jas@kaka:~$ pip3 install --user pynitrokey
Collecting pynitrokey
  Downloading pynitrokey-0.4.34-py3-none-any.whl (572 kB)
Collecting frozendict~=2.3.4
  Downloading frozendict-2.3.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (113 kB)
Requirement already satisfied: click<9,>=8.0.0 in /usr/lib/python3/dist-packages (from pynitrokey) (8.0.3)
Collecting ecdsa
  Downloading ecdsa-0.18.0-py2.py3-none-any.whl (142 kB)
Collecting python-dateutil~=2.7.0
  Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
Collecting fido2<2,>=1.1.0
  Downloading fido2-1.1.0-py3-none-any.whl (201 kB)
Collecting tlv8
  Downloading tlv8-0.10.0.tar.gz (16 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: certifi>=14.5.14 in /usr/lib/python3/dist-packages (from pynitrokey) (2020.6.20)
Requirement already satisfied: pyusb in /usr/lib/python3/dist-packages (from pynitrokey) (1.2.1.post1)
Collecting urllib3~=1.26.7
  Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting spsdk<1.8.0,>=1.7.0
  Downloading spsdk-1.7.1-py3-none-any.whl (684 kB)
Collecting typing_extensions~=4.3.0
  Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Requirement already satisfied: cryptography<37,>=3.4.4 in /usr/lib/python3/dist-packages (from pynitrokey) (3.4.8)
Collecting intelhex
  Downloading intelhex-2.3.0-py2.py3-none-any.whl (50 kB)
Collecting nkdfu
  Downloading nkdfu-0.2-py3-none-any.whl (16 kB)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from pynitrokey) (2.25.1)
Collecting tqdm
  Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting nrfutil<7,>=6.1.4
  Downloading nrfutil-6.1.7.tar.gz (845 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi in /usr/lib/python3/dist-packages (from pynitrokey) (1.15.0)
Collecting crcmod
  Downloading crcmod-1.7.tar.gz (89 kB)
  Preparing metadata (setup.py) ... done
Collecting libusb1==1.9.3
  Downloading libusb1-1.9.3-py3-none-any.whl (60 kB)
Collecting pc_ble_driver_py>=0.16.4
  Downloading pc_ble_driver_py-0.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
Collecting piccata
  Downloading piccata-2.0.3-py3-none-any.whl (21 kB)
Collecting protobuf<4.0.0,>=3.17.3
  Downloading protobuf-3.20.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Collecting pyserial
  Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
Collecting pyspinel>=1.0.0a3
  Downloading pyspinel-1.0.3.tar.gz (58 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from nrfutil<7,>=6.1.4->pynitrokey) (5.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil~=2.7.0->pynitrokey) (1.16.0)
Collecting pylink-square<0.11.9,>=0.8.2
  Downloading pylink_square-0.11.1-py2.py3-none-any.whl (78 kB)
Collecting jinja2<3.1,>=2.11
  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting bincopy<17.11,>=17.10.2
  Downloading bincopy-17.10.3-py3-none-any.whl (17 kB)
Collecting fastjsonschema>=2.15.1
  Downloading fastjsonschema-2.16.3-py3-none-any.whl (23 kB)
Collecting astunparse<2,>=1.6
  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting oscrypto~=1.2
  Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
Collecting deepmerge==0.3.0
  Downloading deepmerge-0.3.0-py2.py3-none-any.whl (7.6 kB)
Collecting pyocd<=0.31.0,>=0.28.3
  Downloading pyocd-0.31.0-py3-none-any.whl (12.5 MB)
Collecting click-option-group<0.6,>=0.3.0
  Downloading click_option_group-0.5.5-py3-none-any.whl (12 kB)
Collecting pycryptodome<4,>=3.9.3
  Downloading pycryptodome-3.17-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pyocd-pemicro<1.2.0,>=1.1.1
  Downloading pyocd_pemicro-1.1.5-py3-none-any.whl (9.0 kB)
Requirement already satisfied: colorama<1,>=0.4.4 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (0.4.4)
Collecting commentjson<1,>=0.9
  Downloading commentjson-0.9.0.tar.gz (8.7 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: asn1crypto<2,>=1.2 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.0)
Collecting pypemicro<0.2.0,>=0.1.9
  Downloading pypemicro-0.1.11-py3-none-any.whl (5.7 MB)
Collecting libusbsio>=2.1.11
  Downloading libusbsio-2.1.11-py3-none-any.whl (247 kB)
Collecting sly==0.4
  Downloading sly-0.4.tar.gz (60 kB)
  Preparing metadata (setup.py) ... done
Collecting ruamel.yaml<0.18.0,>=0.17
  Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting cmsis-pack-manager<0.3.0
  Downloading cmsis_pack_manager-0.2.10-py2.py3-none-manylinux1_x86_64.whl (25.1 MB)
Collecting click-command-tree==1.1.0
  Downloading click_command_tree-1.1.0-py3-none-any.whl (3.6 kB)
Requirement already satisfied: bitstring<3.2,>=3.1 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (3.1.7)
Collecting hexdump~=3.3
  Downloading hexdump-3.3.zip (12 kB)
  Preparing metadata (setup.py) ... done
Collecting fire
  Downloading fire-0.5.0.tar.gz (88 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse<2,>=1.6->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.37.1)
Collecting humanfriendly
  Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Collecting argparse-addons>=0.4.0
  Downloading argparse_addons-0.12.0-py3-none-any.whl (3.3 kB)
Collecting pyelftools
  Downloading pyelftools-0.29-py2.py3-none-any.whl (174 kB)
Collecting milksnake>=0.1.2
  Downloading milksnake-0.1.5-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: appdirs>=1.4 in /usr/lib/python3/dist-packages (from cmsis-pack-manager<0.3.0->spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.4)
Collecting lark-parser<0.8.0,>=0.7.1
  Downloading lark-parser-0.7.8.tar.gz (276 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2<3.1,>=2.11->spsdk<1.8.0,>=1.7.0->pynitrokey) (2.0.1)
Collecting asn1crypto<2,>=1.2
  Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
Collecting wrapt
  Downloading wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
  Preparing metadata (setup.py) ... done
Collecting psutil>=5.2.2
  Downloading psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting capstone<5.0,>=4.0
  Downloading capstone-4.0.2-py2.py3-none-manylinux1_x86_64.whl (2.1 MB)
Collecting naturalsort<2.0,>=1.5
  Downloading naturalsort-1.5.1.tar.gz (7.4 kB)
  Preparing metadata (setup.py) ... done
Collecting prettytable<3.0,>=2.0
  Downloading prettytable-2.5.0-py3-none-any.whl (24 kB)
Collecting intervaltree<4.0,>=3.0.2
  Downloading intervaltree-3.1.0.tar.gz (32 kB)
  Preparing metadata (setup.py) ... done
Collecting ruamel.yaml.clib>=0.2.6
  Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting termcolor
  Downloading termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting sortedcontainers<3.0,>=2.0
  Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: wcwidth in /usr/lib/python3/dist-packages (from prettytable<3.0,>=2.0->pyocd<=0.31.0,>=0.28.3->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.2.5)
Building wheels for collected packages: nrfutil, crcmod, sly, tlv8, commentjson, hexdump, pyspinel, fire, intervaltree, lark-parser, naturalsort, future
  Building wheel for nrfutil (setup.py) ... done
  Created wheel for nrfutil: filename=nrfutil-6.1.7-py3-none-any.whl size=898520 sha256=de6f8803f51d6c26d24dc7df6292064a468ff3f389d73370433fde5582b84a10
  Stored in directory: /home/jas/.cache/pip/wheels/39/2b/9b/98ab2dd716da746290e6728bdb557b14c1c9a54cb9ed86e13b
  Building wheel for crcmod (setup.py) ... done
  Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31422 sha256=5149ac56fcbfa0606760eef5220fcedc66be560adf68cf38c604af3ad0e4a8b0
  Stored in directory: /home/jas/.cache/pip/wheels/85/4c/07/72215c529bd59d67e3dac29711d7aba1b692f543c808ba9e86
  Building wheel for sly (setup.py) ... done
  Created wheel for sly: filename=sly-0.4-py3-none-any.whl size=27352 sha256=f614e413918de45c73d1e9a8dca61ca07dc760d9740553400efc234c891f7fde
  Stored in directory: /home/jas/.cache/pip/wheels/a2/23/4a/6a84282a0d2c29f003012dc565b3126e427972e8b8157ea51f
  Building wheel for tlv8 (setup.py) ... done
  Created wheel for tlv8: filename=tlv8-0.10.0-py3-none-any.whl size=11266 sha256=3ec8b3c45977a3addbc66b7b99e1d81b146607c3a269502b9b5651900a0e2d08
  Stored in directory: /home/jas/.cache/pip/wheels/e9/35/86/66a473cc2abb0c7f21ed39c30a3b2219b16bd2cdb4b33cfc2c
  Building wheel for commentjson (setup.py) ... done
  Created wheel for commentjson: filename=commentjson-0.9.0-py3-none-any.whl size=12092 sha256=28b6413132d6d7798a18cf8c76885dc69f676ea763ffcb08775a3c2c43444f4a
  Stored in directory: /home/jas/.cache/pip/wheels/7d/90/23/6358a234ca5b4ec0866d447079b97fedf9883387d1d7d074e5
  Building wheel for hexdump (setup.py) ... done
  Created wheel for hexdump: filename=hexdump-3.3-py3-none-any.whl size=8913 sha256=79dfadd42edbc9acaeac1987464f2df4053784fff18b96408c1309b74fd09f50
  Stored in directory: /home/jas/.cache/pip/wheels/26/28/f7/f47d7ecd9ae44c4457e72c8bb617ef18ab332ee2b2a1047e87
  Building wheel for pyspinel (setup.py) ... done
  Created wheel for pyspinel: filename=pyspinel-1.0.3-py3-none-any.whl size=65033 sha256=01dc27f81f28b4830a0cf2336dc737ef309a1287fcf33f57a8a4c5bed3b5f0a6
  Stored in directory: /home/jas/.cache/pip/wheels/95/ec/4b/6e3e2ee18e7292d26a65659f75d07411a6e69158bb05507590
  Building wheel for fire (setup.py) ... done
  Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=3d288585478c91a6914629eb739ea789828eb2d0267febc7c5390cb24ba153e8
  Stored in directory: /home/jas/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
  Building wheel for intervaltree (setup.py) ... done
  Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26119 sha256=5ff1def22ba883af25c90d90ef7c6518496fcd47dd2cbc53a57ec04cd60dc21d
  Stored in directory: /home/jas/.cache/pip/wheels/fa/80/8c/43488a924a046b733b64de3fac99252674c892a4c3801c0a61
  Building wheel for lark-parser (setup.py) ... done
  Created wheel for lark-parser: filename=lark_parser-0.7.8-py2.py3-none-any.whl size=62527 sha256=3d2ec1d0f926fc2688d40777f7ef93c9986f874169132b1af590b6afc038f4be
  Stored in directory: /home/jas/.cache/pip/wheels/29/30/94/33e8b58318aa05cb1842b365843036e0280af5983abb966b83
  Building wheel for naturalsort (setup.py) ... done
  Created wheel for naturalsort: filename=naturalsort-1.5.1-py3-none-any.whl size=7526 sha256=bdecac4a49f2416924548cae6c124c85d5333e9e61c563232678ed182969d453
  Stored in directory: /home/jas/.cache/pip/wheels/a6/8e/c9/98cfa614fff2979b457fa2d9ad45ec85fa417e7e3e2e43be51
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492037 sha256=57a01e68feca2b5563f5f624141267f399082d2f05f55886f71b5d6e6cf2b02c
  Stored in directory: /home/jas/.cache/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built nrfutil crcmod sly tlv8 commentjson hexdump pyspinel fire intervaltree lark-parser naturalsort future
Installing collected packages: tlv8, sortedcontainers, sly, pyserial, pyelftools, piccata, naturalsort, libusb1, lark-parser, intelhex, hexdump, fastjsonschema, crcmod, asn1crypto, wrapt, urllib3, typing_extensions, tqdm, termcolor, ruamel.yaml.clib, python-dateutil, pyspinel, pypemicro, pycryptodome, psutil, protobuf, prettytable, oscrypto, milksnake, libusbsio, jinja2, intervaltree, humanfriendly, future, frozendict, fido2, ecdsa, deepmerge, commentjson, click-option-group, click-command-tree, capstone, astunparse, argparse-addons, ruamel.yaml, pyocd-pemicro, pylink-square, pc_ble_driver_py, fire, cmsis-pack-manager, bincopy, pyocd, nrfutil, nkdfu, spsdk, pynitrokey
  WARNING: The script nitropy is installed in '/home/jas/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed argparse-addons-0.12.0 asn1crypto-1.5.1 astunparse-1.6.3 bincopy-17.10.3 capstone-4.0.2 click-command-tree-1.1.0 click-option-group-0.5.5 cmsis-pack-manager-0.2.10 commentjson-0.9.0 crcmod-1.7 deepmerge-0.3.0 ecdsa-0.18.0 fastjsonschema-2.16.3 fido2-1.1.0 fire-0.5.0 frozendict-2.3.5 future-0.18.3 hexdump-3.3 humanfriendly-10.0 intelhex-2.3.0 intervaltree-3.1.0 jinja2-3.0.3 lark-parser-0.7.8 libusb1-1.9.3 libusbsio-2.1.11 milksnake-0.1.5 naturalsort-1.5.1 nkdfu-0.2 nrfutil-6.1.7 oscrypto-1.3.0 pc_ble_driver_py-0.17.0 piccata-2.0.3 prettytable-2.5.0 protobuf-3.20.3 psutil-5.9.4 pycryptodome-3.17 pyelftools-0.29 pylink-square-0.11.1 pynitrokey-0.4.34 pyocd-0.31.0 pyocd-pemicro-1.1.5 pypemicro-0.1.11 pyserial-3.5 pyspinel-1.0.3 python-dateutil-2.7.5 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 sly-0.4 sortedcontainers-2.4.0 spsdk-1.7.1 termcolor-2.2.0 tlv8-0.10.0 tqdm-4.65.0 typing_extensions-4.3.0 urllib3-1.26.15 wrapt-1.15.0
jas@kaka:~$
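A minimal sketch of one way to start such an audit, using the third-party pip-audit and pip-licenses tools (named here as a suggestion; they are not part of the setup above):
$ pip3 install --user pip-audit pip-licenses
$ pip-audit                  # scan installed packages against known vulnerability databases
$ pip-licenses --with-urls   # list the license declared by each installed package
This of course only covers known CVEs and self-declared license metadata; a thorough audit of fifty-odd transitive dependencies would take considerably more effort.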
Then upgrading the device worked remarkably well, although I wish that the tool would have printed URLs and checksums for the firmware files to allow easy confirmation.
jas@kaka:~$ PATH=$PATH:/home/jas/.local/bin
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.15-5D271572: Nitrokey Nitrokey Start (RTM.12.1-RC2-modified)
jas@kaka:~$ nitropy start update
Command line tool to interact with Nitrokey devices 0.4.34
Nitrokey Start firmware update tool
Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
System: Linux, is_linux: True
Python: 3.10.6
Saving run log to: /tmp/nitropy.log.gc5753a8
Admin PIN: 
Firmware data to be used:
- FirmwareType.REGNUAL: 4408, hash: ...b'72a30389' valid (from ...built/RTM.13/regnual.bin)
- FirmwareType.GNUK: 129024, hash: ...b'25a4289b' valid (from ...prebuilt/RTM.13/gnuk.bin)
Currently connected device strings:
Device: 
    Vendor: Nitrokey
   Product: Nitrokey Start
    Serial: FSIJ-1.2.15-5D271572
  Revision: RTM.12.1-RC2-modified
    Config: *:*:8e82
       Sys: 3.0
     Board: NITROKEY-START-G
initial device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.15-5D271572', 'Revision': 'RTM.12.1-RC2-modified', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
Please note:
- Latest firmware available is: 
  RTM.13 (published: 2022-12-08T10:59:11Z)
- provided firmware: None
- all data will be removed from the device!
- do not interrupt update process - the device may not run properly!
- the process should not take more than 1 minute
Do you want to continue? [yes/no]: yes
...
Starting bootloader upload procedure
Device: Nitrokey Start FSIJ-1.2.15-5D271572
Connected to the device
Running update!
Do NOT remove the device from the USB slot, until further notice
Downloading flash upgrade program...
Executing flash upgrade...
Waiting for device to appear:
  Wait 20 seconds.....
Downloading the program
Protecting device
Finish flashing
Resetting device
Update procedure finished. Device could be removed from USB slot.
Currently connected device strings (after upgrade):
Device: 
    Vendor: Nitrokey
   Product: Nitrokey Start
    Serial: FSIJ-1.2.19-5D271572
  Revision: RTM.13
    Config: *:*:8e82
       Sys: 3.0
     Board: NITROKEY-START-G
device can now be safely removed from the USB slot
final device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.19-5D271572', 'Revision': 'RTM.13', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
finishing session 2023-03-16 21:49:07.371291
Log saved to: /tmp/nitropy.log.gc5753a8
jas@kaka:~$ 
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.19-5D271572: Nitrokey Nitrokey Start (RTM.13)
jas@kaka:~$ 
Before importing the master key to this device, it should be configured. Note the commands in the beginning to make sure scdaemon/pcscd is not running, because they may have cached state from earlier cards. Change the PIN code as you like after this; my experience with Gnuk was that the Admin PIN had to be changed first, then you import the key, and then you change the PIN.
jas@kaka:~$ gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
OK
ERR 67125247 Slut på fil <GPG Agent>
jas@kaka:~$ ps auxww | grep -e pcsc -e scd
jas        11651  0.0  0.0   3468  1672 pts/0    R+   21:54   0:00 grep --color=auto -e pcsc -e scd
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......: 
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
gpg/card> admin
Admin commands are allowed
gpg/card> kdf-setup
gpg/card> passwd
gpg: OpenPGP card no. D276000124010200FFFE5D2715720000 detected
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? 3
PIN changed.
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? q
gpg/card> name
Cardholder's surname: Josefsson
Cardholder's given name: Simon
gpg/card> lang
Language preferences: sv
gpg/card> sex
Salutation (M = Mr., F = Ms., or space): m
gpg/card> login
Login data (account name): jas
gpg/card> url
URL to retrieve public key: https://josefsson.org/key-20190320.txt
gpg/card> forcesig
gpg/card> key-attr
Changing card key attribute for: Signature key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
Note: There is no guarantee that the card supports the requested size.
      If the key generation does not succeed, please check the
      documentation of your card to see what sizes are allowed.
Changing card key attribute for: Encryption key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: cv25519
Changing card key attribute for: Authentication key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
gpg/card> 
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: on
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@kaka:~$ 
Once set up, bring out your offline machine, boot it, and mount your USB stick with the offline key. The paths below will be different, and this uses a somewhat unorthodox approach of working with fresh GnuPG configuration paths that I chose for the USB stick.
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ cp -a gnupghome-backup-masterkey gnupghome-import-nitrokey-5D271572
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ gpg --homedir $PWD/gnupghome-import-nitrokey-5D271572 --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
sec  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expired: 2019-10-22  usage: SC  
     trust: ultimate      validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg> keytocard
Really move the primary key? (y/N) y
Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1
sec  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expired: 2019-10-22  usage: SC  
     trust: ultimate      validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg> 
Save changes? (y/N) y
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ 
At this point it is useful to confirm that the Nitrokey has the master key available and that it is possible to sign statements with it, back on your regular machine:
jas@kaka:~$ gpg --card-status
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 1
KDF setting ......: on
Signature key ....: B1D2 BD13 75BE CB78 4CF4  F8C4 D73C F638 C53C 06BE
      created ....: 2019-03-20 23:37:24
Encryption key....: [none]
Authentication key: [none]
General key info..: pub  ed25519/D73CF638C53C06BE 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec>  ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 5D271572
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
jas@kaka:~$ echo foo | gpg -a --sign | gpg --verify
gpg: Signature made Thu Mar 16 22:11:02 2023 CET
gpg:                using EDDSA key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@kaka:~$ 
Finally, to retrieve and sign a key: for example Andre Heinecke's, whose OpenPGP key identifier I could confirm from his business card.
jas@kaka:~$ gpg --locate-external-keys aheinecke@gnupg.com
gpg: key 1FDF723CF462B6B1: public key "Andre Heinecke <aheinecke@gnupg.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   2  signed:   7  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1  valid:   7  signed:  64  trust: 7-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2023-05-26
pub   rsa3072 2015-12-08 [SC] [expires: 2025-12-05]
      94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1
uid           [ unknown] Andre Heinecke <aheinecke@gnupg.com>
sub   ed25519 2017-02-13 [S]
sub   ed25519 2017-02-13 [A]
sub   rsa3072 2015-12-08 [E] [expires: 2025-12-05]
sub   rsa3072 2015-12-08 [A] [expires: 2025-12-05]
jas@kaka:~$ gpg --edit-key "94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1"
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub  rsa3072/1FDF723CF462B6B1
     created: 2015-12-08  expires: 2025-12-05  usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/2978E9D40CBABA5C
     created: 2017-02-13  expires: never       usage: S   
sub  ed25519/DC74D901C8E2DD47
     created: 2017-02-13  expires: never       usage: A   
The following key was revoked on 2017-02-23 by RSA key 1FDF723CF462B6B1 Andre Heinecke <aheinecke@gnupg.com>
sub  cv25519/1FFE3151683260AB
     created: 2017-02-13  revoked: 2017-02-23  usage: E   
sub  rsa3072/8CC999BDAA45C71F
     created: 2015-12-08  expires: 2025-12-05  usage: E   
sub  rsa3072/6304A4B539CE444A
     created: 2015-12-08  expires: 2025-12-05  usage: A   
[ unknown] (1). Andre Heinecke <aheinecke@gnupg.com>
gpg> sign
pub  rsa3072/1FDF723CF462B6B1
     created: 2015-12-08  expires: 2025-12-05  usage: SC  
     trust: unknown       validity: unknown
 Primary key fingerprint: 94A5 C9A0 3C2F E5CA 3B09  5D8E 1FDF 723C F462 B6B1
     Andre Heinecke <aheinecke@gnupg.com>
This key is due to expire on 2025-12-05.
Are you sure that you want to sign this key with your
key "Simon Josefsson <simon@josefsson.org>" (D73CF638C53C06BE)
Really sign? (y/N) y
gpg> quit
Save changes? (y/N) y
jas@kaka:~$ 
This is on my day-to-day machine, using the Nitrokey Start with the offline key. No need to boot the old offline machine just to sign keys or extend expiry anymore! At FOSDEM 23 I managed to get at least one DD signature on my new key, and the Debian keyring maintainers accepted my Ed25519 key. Hopefully I can now finally let my 2014-era RSA3744 key expire on 2023-09-19 and not extend it any further. This should finish my transition to a simpler OpenPGP key setup, yay!

24 March 2023

Matthew Garrett: We need better support for SSH host certificates

Github accidentally committed their SSH RSA private key to a repository, and now a bunch of people's infrastructure is broken because it needs to be updated to trust the new key. This is obviously bad, but what's frustrating is that there's no inherent need for it to be - almost all the technological components needed to both reduce the initial risk and to make the transition seamless already exist.

But first, let's talk about what actually happened here. You're probably used to the idea of TLS certificates from using browsers. Every website that supports TLS has an asymmetric pair of keys divided into a public key and a private key. When you contact the website, it gives you a certificate that contains the public key, and your browser then performs a series of cryptographic operations against it to (a) verify that the remote site possesses the private key (which prevents someone just copying the certificate to another system and pretending to be the legitimate site), and (b) generate an ephemeral encryption key that's used to actually encrypt the traffic between your browser and the site. But what stops an attacker from simply giving you a fake certificate that contains their public key? The certificate is itself signed by a certificate authority (CA), and your browser is configured to trust a preconfigured set of CAs. CAs will not give someone a signed certificate unless they prove they have legitimate ownership of the site in question, so (in theory) an attacker will never be able to obtain a fake certificate for a legitimate site.

This infrastructure is used for pretty much every protocol that can use TLS, including things like SMTP and IMAP. But SSH doesn't use TLS, and doesn't participate in any of this infrastructure. Instead, SSH tends to take a "Trust on First Use" (TOFU) model - the first time you ssh into a server, you receive a prompt asking you whether you trust its public key, and then you probably hit the "Yes" button and get on with your life. This works fine up until the point where the key changes, and SSH suddenly starts complaining that there's a mismatch and something awful could be happening (like someone intercepting your traffic and directing it to their own server with their own keys). Users are then supposed to verify whether this change is legitimate, and if so remove the old keys and add the new ones. This is tedious and risks users just saying "Yes" again, and if it happens too often an attacker can simply redirect target users to their own server and through sheer fatigue at dealing with this crap the user will probably trust the malicious server.

Why not certificates? OpenSSH actually does support certificates, but not in the way you might expect. There's a custom format that's significantly less complicated than the X509 certificate format used in TLS. Basically, an SSH certificate just contains a public key, a list of hostnames it's good for, and a signature from a CA. There's no pre-existing set of trusted CAs, so anyone could generate a certificate that claims it's valid for, say, github.com. This isn't really a problem, though, because right now nothing pays attention to SSH host certificates unless there's some manual configuration.
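For concreteness, here is a minimal sketch of what issuing such a certificate looks like with stock OpenSSH tooling (every name and path here is made up for illustration):
$ ssh-keygen -t ed25519 -f host_ca -C "example host CA"
$ ssh-keygen -s host_ca -I web01 -h -n example.com,www.example.com -V +52w /etc/ssh/ssh_host_ed25519_key.pub
This writes /etc/ssh/ssh_host_ed25519_key-cert.pub, which sshd will present to clients once a HostCertificate line pointing at it is added to sshd_config. The -h flag marks it as a host (rather than user) certificate, -n lists the hostnames it is valid for, and -V sets the validity window.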

(It's actually possible to glue the general PKI infrastructure into SSH certificates. Please do not do this)

So let's look at what happened in the Github case. The first question is "How could the private key have been somewhere that could be committed to a repository in the first place?". I have no unique insight into what happened at Github, so this is conjecture, but I'm reasonably confident in it. Github deals with a large number of transactions per second. Github.com is not a single computer - it's a large number of machines. All of those need to have access to the same private key, because otherwise git would complain that the private key had changed whenever it connected to a machine with a different private key (the alternative would be to use a different IP address for every frontend server, but that would instead force users to repeatedly accept additional keys every time they connect to a new IP address). Something needs to be responsible for deploying that private key to new systems as they're brought up, which means there's ample opportunity for it to accidentally end up in the wrong place.

Now, best practices suggest that this should be avoided by simply placing the private key in a hardware module that performs the cryptographic operations, ensuring that nobody can ever get at the private key. The problem faced here is that HSMs typically aren't going to be fast enough to handle the number of requests per second that Github deals with. This can be avoided by using something like a Nitro Enclave, but you're still going to need a bunch of these in different geographic locales because otherwise your front ends are still going to be limited by the need to talk to an enclave on the other side of the planet, and now you're still having to deal with distributing the private key to a bunch of systems.

What if we could have the best of both worlds - the performance of private keys that just happily live on the servers, and the security of private keys that live in HSMs? Unsurprisingly, we can! The SSH private key could be deployed to every front end server, but every minute it could call out to an HSM-backed service and request a new SSH host certificate signed by a private key in the HSM. If clients are configured to trust the key that's signing the certificates, then it doesn't matter what the private key on the servers is - the client will see that there's a valid certificate and will trust the key, even if it changes. Restricting the validity of the certificate to a small window of time means that if a key is compromised an attacker can't do much with it - the moment you become aware of that you stop signing new certificates, and once all the existing ones expire the old private key becomes useless. You roll out a new private key with new certificates signed by the same CA and clients just carry on trusting it without any manual involvement.
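A sketch of one local approximation of that renewal loop, assuming the CA key lives in an HSM reachable through a PKCS#11 module (ssh-keygen can sign using a CA key held in a PKCS#11 token; a real deployment would more likely call out to a remote signing service, and all paths and names below are hypothetical):
$ ssh-keygen -s host_ca.pub -D /usr/lib/pkcs11/libexamplehsm.so -I "frontend-$(hostname)" -h -n github.com -V +5m /etc/ssh/ssh_host_ed25519_key.pub
$ systemctl reload ssh    # pick up the freshly signed certificate
With a five-minute validity window, a leaked host private key stops being useful to an attacker almost as soon as the CA stops signing certificates for it.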

Why don't we have this already? The main problem is that client tooling just doesn't handle this well. OpenSSH has no way to do TOFU for CAs, just the keys themselves. This means there's no way to do a git clone ssh://git@github.com/whatever and get a prompt asking you to trust Github's CA. Instead, you need to add a @cert-authority github.com (key) line to your known_hosts file by hand, and since approximately nobody's going to do that there's only marginal benefit in going to the effort to implement this infrastructure. The most important thing we can do to improve the security of the SSH ecosystem is to make it easier to use certificates, and that means improving the behaviour of the clients.
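For reference, the line in question looks something like this (CA key blob abbreviated):
@cert-authority github.com,*.github.com ssh-ed25519 AAAAC3Nza...
Once that line is present, any host certificate for a matching hostname that is signed by that CA key is accepted without a TOFU prompt.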

It should be noted that certificates aren't the only approach to handling key migration. OpenSSH supports a protocol for key rotation, basically by allowing the server to provide a set of multiple trusted keys that the client can cache, and then invalidating old ones. Unfortunately this still requires that the "new" private keys be deployed in the same way as the old ones, so any screwup that results in one private key being leaked may well also result in the additional keys being leaked. I prefer the certificate approach.
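On the client side this rotation protocol (the hostkeys-00@openssh.com extension) is controlled by the UpdateHostKeys option, which recent OpenSSH versions enable by default in many configurations; a minimal explicit opt-in in ~/.ssh/config looks like:
Host *
    UpdateHostKeys ask
With ask the client prompts before recording the additional keys; yes accepts them silently after a successful connection to an already-trusted host.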

Finally, I've seen a couple of people imply that the blame here should be attached to whoever or whatever caused the private key to be committed to a repository in the first place. This is a terrible take. Humans will make mistakes, and your systems should be resilient against that. There's no individual at fault here - there's a series of design decisions that made it possible for a bad outcome to occur, and in a better universe they wouldn't have been necessary. Let's work on building that better universe.


22 March 2023

Shirish Agarwal: Anti-national says the Indian Law Minister.

"Anti-national and Anti-India judges." (Kiren Rijiju, Law Minister)
For those who can't see it, the above poster says the following: "A handful of retired Supreme Court judges who are part of Anti-India and are trying to make the Indian judiciary play the role of the Opposition party." Law Minister Kiren Rijiju. Now, just to give a bit more context, the above has happened as the CJI (Chief Justice of India) has not been listening to them or toeing their line.
The above is a statement given by CJI DY Chandrachud. He says, and I quote: "Democracy needs truth to survive. Democracy and truth go hand in hand. Speaking truth to power is a right of every citizen in a democracy. It is equally a duty."
Another quote by him: "I am personally averse to sealed covers. There has to be transparency in Court. This is about implementing the orders. What can be secrecy here." Now I need to again give context for the various statements. The law minister who gave this statement, his name incidentally first came under the radar sometime in 2013-2014, in a list given/shared by Union Home Secretary R K Singh (then in UPA, now in BJP), as an anti-India China sympathizer. Of course, today all sorts of thugs use the brand of nationalism; he is one of them. Now as far as the CJI is concerned, AFAIK he bent over backwards for them, but they were not pleased with him. The latest sealed cover statement is because the BJP wanted to use a sealed cover in the ongoing Udhav Thackeray vs Eknath Shinde case, where the BJP or the Eknath Shinde faction tried to submit one. The problem with a sealed cover is that there is nothing for the defence to fight against. It could be a blank paper, it could be a whole lot of gossip or innuendo; unless it's out in the open, the prosecution, in this case the Udhav Thackeray team, cannot effectively fight as they do not know the contents of the said sealed cover. This goes against all judicial norms. The U.S. tried sealed covers and ended the practice only after a couple of cases. Even in the OROP case (One Rank One Pension) the Govt. tried the same trick. As I have shared before, this actually comes from Senator Joseph McCarthy, who started this whole thing in the 1950s; a bit of background on him. I probably have shared about him before, but it bears knowing and remembering again and again. The same is what Mr. Trump has done time and again. It's a similar script: bully your opponents and use a sealed cover, as no questions can be asked about it. The good news though is that the views of the CJI have been changing. So, yes, they would like to change the CJI; they actually have been trying to have their own man hold the keys to the judiciary, but without success so far. Soon though this bastion would also fall, if not today then tomorrow. The EC (Election Commission) has been thoroughly compromised. I would not share more, as the EC's bending acts would need a whole blog post or two to cover the numerous instances where the BJP has been given all rights. In fact, via RTI it came to be known that many BJP leaders had given false statements about their education in their EC affidavits. That alone should have been grounds for throwing out the legislators, but in their wisdom they see fit to remain blind. The latest case is that of Mr. Nishikant Dubey. Of course, even with the legal documents shared in the public domain and the EC having powers to verify such documents, they are choosing to remain silent. If one has lied about one thing in an affidavit, who knows how many lies he has shared in that same document? And how are you supposed to trust anything that comes out of his mouth? This is the state of not just Mr. Dubey but a whole lot of people who are in the BJP today. And the EC as an institution seems to have been let down. There was a time when it was led by T.N. Seshan, who was courageous, fearless and fair. He later went on to become Speaker, but even then he was harsh but fair. Perhaps people thought we would always have people like him. Hence, instead of strong institutions having strong rules, we based our trust on people, and hence we are where we are. There is much to share, but some other day. Till later.

20 March 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, February 2023 (by LTS Team)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In February, 15 contributors were paid to work on Debian LTS; their reports are available:
  • Adrian Bunk did 22.0h (out of 32.25h assigned), thus carrying over 10.25h to the next month.
  • Anton Gladky did 9.75h (out of 11.5h assigned and 3.5h from previous period), thus carrying over 5.25h to the next month.
  • Ben Hutchings did 8.0h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 16.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 26.25h (out of 0h assigned and 35.0h from previous period), thus carrying over 8.75h to the next month.
  • Guilhem Moulin did 20.0h (out of 20.0h assigned).
  • Helmut Grohne did 5.0h (out of 5.0h assigned and 5.0h from previous period), thus carrying over 5.0h to the next month.
  • Lee Garrett did 26.75h (out of 19.75h assigned and 12.5h from previous period), thus carrying over 5.5h to the next month.
  • Markus Koschany did 32.25h (out of 32.25h assigned).
  • Ola Lundqvist did 11.5h (out of 12.5h assigned and 11.5h from previous period), thus carrying over 12.5h to the next month.
  • Roberto C. Sánchez did 5.0h (out of 9.5h assigned and 22.5h from previous period), thus carrying over 27.0h to the next month.
  • Sylvain Beucler did 32.0h (out of 17.25h assigned and 15.0h from previous period), thus carrying over 0.25h to the next month.
  • Thorsten Alteholz did 8.0h (out of 14.0h assigned), thus carrying over 6.0h to the next month.
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 24.25h (out of 49.25h assigned), thus carrying over 8.0h to the next month.

Evolution of the situation In February we released 44 DLAs, which resolved 156 CVEs. We are glad to welcome some new contributors who will hopefully help us fix CVEs in the supported release even faster. However, we also experienced some setbacks, as a few sponsors have stopped (or decreased) their support. If your company has ever hesitated to sponsor Debian LTS, now might be a good time to join, to ensure that we can continue this important work without having to scale down the number of packages that we are able to support.

Thanks to our sponsors Sponsors that joined recently are in bold.

19 March 2023

Russ Allbery: Review: Allow Me to Retort

Review: Allow Me to Retort, by Elie Mystal
Publisher: The New Press
Copyright: 2022
ISBN: 1-62097-690-0
Format: Kindle
Pages: 257
If you're familiar with Elie Mystal's previous work (writer for The Nation, previously editor for Above the Law, Twitter gadfly, and occasional talking head on news commentary programs), you'll have a good idea what to expect from this book: pointed liberal commentary, frequently developing into rants once he works up a head of steam. The subtitle of A Black Guy's Guide to the Constitution tells you that the topic is US constitutional law, which is very on brand. You're going to get succinct and uncompromising opinions at the intersection of law and politics. If you agree with them, you'll probably find them funny; if you disagree with them, you'll probably find them infuriating. In other words, Elie Mystal is the sort of writer one reads less for "huh, I disagreed with you but that's a good argument" and more for "yeah, you tell 'em, Elie!" I will be very surprised if this book changes anyone's mind about a significant political debate. I'm not sure if people who disagree are even in the intended audience. I'm leery of this sort of book. Usually its function is to feed confirmation bias with some witty rejoinders and put-downs that only sound persuasive to people who already agree with them. If I want that, I can just read Twitter (and you will be unsurprised to know that Mystal has nearly 500,000 Twitter followers). This style can also be boring at book length if the author is repeating variations on a theme. There is indeed a lot of that here, particularly in the first part of this book. If you don't generally agree with Mystal already, save yourself the annoyance and avoid this like the plague. It's just going to make you mad, and I don't think you're going to get anything useful out of it. But as I got deeper into this book, I think Mystal has another, more interesting purpose that's aimed at people who do largely agree. He's trying to undermine a very common US attitude (even on the left) about the US constitution. I don't know if most people from the US (particularly if they're white and male) realize quite how insufferably smug we tend to be about the US constitution. When you grow up here, the paeans to the constitution and the Founding Fathers (always capitalized like deities) are so ubiquitous and unremarked that it's difficult not to absorb them at a subconscious level. There is a national mythology about the greatness of our charter of government that crosses most political divides. In its modern form, this comes with some acknowledgment that some of its original provisions (the notorious three-fifths of a person clause, for instance) were bad, but we subsequently fixed them and everything is good now. Nearly everyone gets taught this in school, and it's almost never challenged. Even the edifices of the US left, such as the ACLU and the NAACP, tend to wrap themselves in the constitution. It's an enlightening experience to watch someone from the US corner a European with a discussion of the US constitution and watch the European plan escape routes while their soul attempts to leave their body. And I think it's telling that having that experience, as rare as it might be given how oblivious we can be, is still more common than a white person having a frank conversation with a black person in the US about the merits of the constitution as written. For various reasons, mostly because this is not very safe for the black person, this rarely happens. 
This book is primarily Mystal giving his opinion on various current controversies in constitutional law, but the underlying refrain is that the constitution is a trash document written by awful people that sets up a bad political system. That system has been aggressively defended by reactionary Supreme Courts, which along with the designed difficulty of the amendment process has prevented fixing many obviously broken parts. This in turn has led to numerous informal workarounds and elaborate "interpretations" to attempt to make the system vaguely functional. In other words, Mystal is trying to tell the US reader to stop being so precious about this specific document, and is using its truly egregious treatment of black people as the main fulcrum for his argument. Along the way, he gives an abbreviated tour of the highlights of constitutional law, but if you're at all interested in politics you've probably heard most of that before. The main point, I think, is to dig up any reverence left over from a US education, haul it out into the light of day, and compare it to the obvious failures of the constitution as a body of law and the moral failings of its authors. Mystal then asks exactly why we should care about original intent or be so reluctant to change the resulting system of government. (Did I mention you should not bother with this book if you don't agree with Mystal politically? Seriously, don't do that to yourself.) Readers of my reviews will know that I'm fairly far to the left politically, particularly by US standards, and yet I found it fascinating how much lingering reverence Mystal managed to dig out of me while reading this book. I found myself getting defensive in places, which is absurd because I didn't write this document. But I grew up surrounded by nigh-universal social signaling that the US constitution was the greatest political document ever, and in a religious tradition that often argued that it was divinely inspired. If one is exposed to enough of this, it becomes part of your background understanding of the world. Sometimes it takes someone being deliberately provocative to haul it back up to the surface where it can be examined. This book is not solely a psychological intervention in national mythology. Mystal gets into detailed legal arguments as well. I thought the most interesting was the argument that the bizarre and unconvincing "penumbras" and "emanations" reasoning in Griswold v. Connecticut (which later served as the basis of Roe v. Wade) was in part because the Lochner era Supreme Court had, in the course of trying to strike down all worker protection laws, abused the concept of substantive due process so badly that Douglas was unwilling to use it in the majority opinion and instead made up entirely new law. Mystal argues that the Supreme Court should have instead tackled the true meaning of substantive due process head-on and decided Griswold on 14th Amendment equal protection and substantive due process grounds. This is probably a well-known argument in legal circles, but I'd not run into it before (and Mystal makes it far more interesting and entertaining than my summary). Mystal also joins the tradition of thinking of the Reconstruction Amendments (the 13th, 14th, and 15th amendments passed after the Civil War) as a second revolution and an attempt to write a substantially new constitution on different legal principles, an attempt that subsequently failed in the face of concerted and deadly reactionary backlash. 
I first encountered this perspective via Jamelle Bouie, and it added a lot to my understanding of Reconstruction to see it as a political fight about the foundational principles of US government in addition to a fight over continuing racism in the US south. Maybe I was unusually ignorant of it (I know I need to read W.E.B. DuBois), but I think this line of reasoning doesn't get enough attention in popular media. Mystal provides a good introduction. But, that being said, Allow Me to Retort is more of a vibes book than an argument. As in his other writing, Mystal focuses on what he sees as the core of a controversy and doesn't sweat the details too much. I felt like he was less trying to convince me and more trying to model a different way of thinking and talking about constitutional law that isn't deferential to ideas that are not worthy of deference. He presents his own legal analysis and possible solutions to current US political challenges, but I don't think the specific policy proposals are the strong part of this book. The point, instead, is to embrace a vigorous politics based on a modern understanding of equality, democracy, and human rights, without a lingering reverence for people who mostly didn't believe in any of those things. The role of the constitution in that politics is a flawed tool rather than a sacred text. I think this book is best thought of as an internal argument in the US left. That argument is entirely within the frame of the US legal tradition, so if you're not in the US, it will be of academic interest at best (and probably not even that). If you're on the US right, Mystal offers lots of provocative pull quotes to enjoy getting outraged over, but he provides that service on Twitter for free. But if you are on the US left, I think Allow Me to Retort is worth more consideration than I'd originally given it. There's something here about how we engage with our legal history, and while Mystal's approach is messy, maybe that's the only way you can get at something that's more emotion than logic. In some places it degenerates into a Twitter rant, but Mystal is usually entertaining even when he's ranting. I'm not sorry I read it. Rating: 7 out of 10

16 March 2023

Valhalla's Things: Swiss Embroidery Princess Petticoat

Posted on March 16, 2023
a person wearing a blue sleeveless fitted dress with calf-length skirt; there are small ruffles on the armscyes and the hem, and white lace on the collar and just above the hem ruffle, and small white buttons on a partial placket down the center front.

A few years ago a friend told me that her usual fabric shop was closing down and had a sale on all remaining stock. While being sad for yet another brick and mortar shop that was going to be missed (at least it was because the owners were retiring, not because it wasn't sustainable anymore), of course I couldn't miss the opportunity. So we drove a few hundred km, had some nice time with a friend that (because of said few hundred km) we rarely see, and spent a few hours looting the corpse... er, helping the shop owner get rid of stock before their retirement.

A surprisingly small pile of fabric; everything is blue or black.

Among other things there was a cut of lightweight swiss embroidery cotton in blue which may or may not have been enthusiastically grabbed with plans of victorian underwear. It was too nice to be buried under layers and layers of fabric (and I suspect that the embroidery wouldn't feel great directly on the skin under a corset), so the natural fit was something at the corset cover layer, and the fabric was enough for a combination garment of the kind often worn in the later Victorian age to prevent the accumulation of bulk at the waist. It also has the nice advantage that in this time of corrupted morals it is perfectly suitable as outerwear, as a nice summer dress. Then life happened, the fabric remained in my stash for a long while, but finally this year I have a good late victorian block that I can adapt, and with spring coming it was a good time to start working on the summer wardrobe.

scan from a vintage book with the pattern for a tight fitting jacket.

The block I've used comes from The Cutters' Practical Guide to the Cutting of Ladies' Garments and is for a jacket, rather than a bodice, but the bodice block from the same book had a 4 part back, which was too much for this garment. I reduced the ease around the bust a bit, which I believe worked just fine. The main pattern was easy enough to prepare, I just had to add skirt panels with a straight side towards the front and flaring out towards the back, and I did a quick mockup from an old sheet to check the fit (good) and the swish and volume of the skirt (just right at the first attempt!). The mockup was also used to get an idea of a few possible necklines, and I opted for a relatively deep V, and a front opening with a partial placket down to halfway between the waist and the hips. I also opted for a self-fabric ruffle at the hem and armscyes.

same dress, same person, from the side, with one hand in the pocket slit.

The only design choice left was the pocket situation: I wanted to wear this garment both as underwear (where pockets aren't needed, and add unwanted bulk) and outerwear (where no pockets is not an option), and the fabric felt too thin to support the weight of the contents of a full pocket. So I decided to add slits into the seams, with just a modesty placket, and wear pockets under the dress as needed.
I decided to put the slits between the side and side back panels for two reasons: one is that this way the pockets can sit towards the back, where the fullness of the skirt is supposed to be, rather than under the flat front, and the other one is to keep the seams around the front panel clean, since they are the first ones to be changed when altering a garment for fit. For the same reason, I didn't trim the excess allowance from that seam: it means that it is a bit more bulky, but the fabric is thin enough that it's not really noticeable, and it gives an additional cm for future alterations.

Then, as the garment was getting close to being finished, I was measuring and storing some old cotton lace I had received as a gift, and there was a length of relatively small lace, and the finish on the neckline was pretty simple and called for embellishment, and who am I to deny embellishment to victorian inspired clothing?

A ruffle pleated into a receiving tuck, each pleat is fixed with a pin, and there are a lot of pins.

First I had to finish attaching the ruffles, however, and this is when I cursed myself for not using the ruffler foot I have (it would have meant not having selvedges on all seams of the ruffle), and for pleating the ruffle rather than gathering it (I prefer the look of handsewn gathers, but here I'm sewing everything by machine, and that's faster, right? (it probably wasn't)).

A metal box full of straight pins.

Also, this is where I started to get low on pins, and I had to use the ones from the vintage [1] box I've been keeping as decoration in the sewing room. A few long sessions of pinning later, the ruffle was sewn and I could add the lace; I used white thread so that it would be hidden on the right side, but easily visible inside the garment in case I'll decide to remove or change it later. A few buttons and buttonholes later, the garment was ready, and the only thing left was to edit the step-by-step pictures and publish the pattern: it's now available as #FreeSoftWear on my patterns website. And of course, I had to do a proper swish test of the finished dress with the ruffle, and I'm happy to announce that it was fully passed.

a person spinning on herself, the skirt and the ruffle are swishing out. Something in the pocket worn under the dress is causing a bit of bulge on one side.

Except, maybe I shouldn't carry heavy items in my pockets when doing it? Oh, well. I have other plans for the same pattern, but they involve making some crochet lace, so I expect I can aim at making them wearable in summer 2024. Now I just have to wait for the weather to be a bit warmer, and then I can start enjoying this one.

  1. ok, even more vintage, since my usual pins come from a plastic box that was probably bought in the 1980s.

15 March 2023

Emanuele Rocca: Disposable Debian VMs with debvm

Some notes on using debvm, an amazing piece of software I've started using only recently.
Create a new virtual machine:
$ debvm-create
You now have a virtual machine with Debian Sid for your host's native architecture (probably amd64). The image file is called rootfs.ext4. You've got 1G of disk space in the VM.
You can now just run the VM! You will be automatically logged in as root.
$ debvm-run
Experiment in the VM, run all the sorts of tests you have in mind. For example, one thing I commonly do is verify that things work in a clean environment, as opposed to "on my machine".
If anything goes wrong, or if you just want to repeat the test: shut down the guest, remove rootfs.ext4, and start again with debvm-create.
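Put together, the whole throwaway loop is just a few commands (a recap of the commands above, nothing more):
$ debvm-create      # build a fresh Debian Sid image (rootfs.ext4)
$ debvm-run         # boot it, experiment, then shut down the guest
$ rm rootfs.ext4    # discard the image and start over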
Now, especially if you intend to create and recreate VMs multiple times, it helps to use a local apt mirror as a cache to avoid fetching all the required packages from the internet over and over again. Install apt-cacher-ng on your host machine and point debvm-create at it:
$ debvm-create -- http://10.0.3.1:3142/debian
The additional options after -- are passed to mmdebstrap. In this case we're specifying http://10.0.3.1:3142/debian as the URL of our local apt mirror. It is going to be used both for image creation and as the only entry in /etc/apt/sources.list on the guest. This is the reason for using 10.0.3.1, the IP address of the lxcbr0 interface used by qemu, instead of 127.0.0.1: to make sure that the guest VM has access to it too.
For a slightly more advanced example, we now want to:
  • run an arm64 VM
  • have more disk space available, say 2G
  • install additional packages (curl and locales)
  • allow SSH as root with the given public keys; in this example, the authorized_keys installed on the host
  • start the VM in the background with no console output
$ debvm-create -a arm64 -o sid-arm64.ext4 -z 2G -k ~/.ssh/authorized_keys -- http://10.0.3.1:3142/debian --include curl,locales
Start the VM:
$ debvm-run -i sid-arm64.ext4 -s 2222 -g -- -display none &
SSH into the guest:
$ ssh -o NoHostAuthenticationForLocalhost=yes -p 2222 root@127.0.0.1
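To avoid retyping that, an equivalent ~/.ssh/config stanza might look like this (a sketch; the debvm host alias is an arbitrary name, not something the tool sets up):
Host debvm
    HostName 127.0.0.1
    Port 2222
    User root
    NoHostAuthenticationForLocalhost yes
After which ssh debvm does the same thing.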
Enjoy!

13 March 2023

Russell Coker: Xmpp Tools

For a while I've had my monitoring systems alert me via XMPP (Jabber). To do that I used the sendxmpp command-line program which worked well for its basic tasks. I recently noticed that my laptop and workstation which I had upgraded to Debian/Testing weren't sending messages; I'm not sure when it started, as my main monitoring of such machines is to touch a key and see if there's a response, and if I'm not at the keyboard then a failure doesn't bother me too much. I've filed Debian bug #1032868 [1] about this. As sendxmpp is apparently not supported upstream and we are preparing for a release it could be that the next version of Debian is released without this working (if it's specific to talking to Prosody) or without sendxmpp (if it fails on all Jabber servers). I next tested xmppc which doesn't send messages (gives no error when I have apparently correct parameters and just doesn't send anything) and doesn't display any text output for info related commands while not giving error messages or an error return code. I filed Debian bug #1032869 [2] about this. Currently the only success I've found with Debian/Testing for this is with go-sendxmpp. To configure that you set up a file named ~/.config/go-sendxmpp/config with the following contents:
username: JABBER-ID
password: PASSWORD
Go-sendxmpp can take a username and password on the command-line but that's bad for security as in the absence of SE Linux or other advanced security systems the password can be seen by any user on the same system who runs ps. To send a message run echo $MESSAGE | go-sendxmpp $ADDR to send $MESSAGE to $ADDR. It also has the option go-sendxmpp -l to listen for incoming messages. I don't have an immediate need to receive messages from the command-line but it's handy to have the option. I probably won't be able to get a new version of etbemon in Debian for the Bookworm release. So to get go-sendxmpp to work with etbemon you need to edit /usr/lib/mon/alert.d/mailxmpp.alert and change this sendxmpp line to this go-sendxmpp line:
open (XMPP, "| /usr/bin/sendxmpp -a /etc/ssl/certs -t @xmpprec -r $host")
open (XMPP, "| /usr/bin/go-sendxmpp @xmpprec")
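With the config file above in place, a quick manual test could look like this (a sketch; the recipient address is hypothetical):
echo "test message from $(hostname)" | go-sendxmpp user@example.org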

Antoine Beaupré: Framework 12th gen laptop review

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, scoring an "exceedingly rare" 10/10 score from ifixit.com. There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets. I received my Framework, 12th generation "DIY", device in late September 2022 and will update this page as I go along in the process of ordering, burning-in, setting up and using the device over the years. Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well. This is also one of the most powerful devices I have ever laid my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine. On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right. Like others, I initially found significant power management issues, but many issues can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery while those are plugged in. Finally, the expansion ports are nice, but there are only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock. Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well. This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework, and jump around the table of contents as you see fit after you buy the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers These are things I wish I had known before buying:
  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards and you do want to fill those expansion slots otherwise they snag around and feel insecure
  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports
  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings
  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount of power, even when sleeping, up to 2-6W for the latter two
  5. beware that the MicroSD card is what it says: Micro; normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage
  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in. I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure. Update: Framework just announced (2023-03-23) a whole bunch of new stuff: The recording is available in this video and it's not your typical keynote. It starts ~25 minutes late, audio is crap, lighting and camera are crap, clapping seems to be from whatever staff they managed to get together in a room, decor is bizarre, colors are shit. It's amazing.

Specifications Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.
  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but that doesn't give you much; typical builds are around 1500-2000$CAD

Actual build This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration
  • CPU: Intel Core i5-1240P (AKA Alder Lake-P, 8 × 4.4GHz P-threads, 8 × 3.2GHz E-threads, 16 total, 28-64W), 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$

Customization
  • Keyboard: US English, included

Expansion Cards
  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage 1TB $199
  • Sub-total: 384$

Accessories
  • Power Adapter - US/Canada $64.00

Total
  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping

Quick evaluation This is basically the TL;DR here, just focusing on broad pros/cons of the laptop.

Pros

Cons
  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)
  • the 12th gen has compatibility issues with Debian, followup in the DebianOn page, but basically: brightness hotkeys, power management, wifi, the webcam is okay even though the chipset is the infamous alder lake because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working
  • 12th gen might have issues with thunderbolt docks
  • they used to have some difficulty keeping up with the orders: the first two batches shipped, the third batch sold out, and the fourth batch should have shipped (?) in October 2021. Update (August 2022): they rolled out a second line of laptops (12th gen); the first batch shipped, the second batch shipped late, and the September 2022 batch was generally on time (see this spreadsheet for a crowdsourced effort to track those shipments). Supply chain issues seem to be under control as of early 2023: they generally keep up with shipping, and I got the Ethernet expansion card shipped within a week.
  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the Purism laptop (tapered edges) makes it feel thinner
  • no space for a 2.5" drive
  • rather bright LED around the power button; it can be dimmed in the BIOS (not low enough for my taste), but I got used to it
  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while
  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)
  • no RJ-45 port, and attempts at designing one kept failing because the modular plugs are too thin to fit (according to Linux After Dark), so an Ethernet card seemed unlikely. Update: they cracked that nut and now ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob (!)
  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)
  • 12th gen Intel has glitchy graphics, seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup A breeze.

Accessing the board The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them. The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off (!) to get access to the internals. Kind of scary. I also actually unplugged a connector while lifting the assembly, because I lifted it towards the monitor, while you actually need to lift it to the right. Thankfully, the connector didn't break, it just snapped off and I could plug it back in, no harm done. Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and CPU slots) did not actually work so I wonder how useful those actually are. After all, they need to point to something and that means a URL, a running website that will answer those requests forever. I bet those will break sooner than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT reform here which designed (with the 100 rabbits folks) an actual paper handbook (PDF). The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404. I was expecting that bar code to lead me to an introduction page, something like "how to setup your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:
https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57
... which Firefox immediately transforms in:
https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57
I'm puzzled as to why they would send the URL that way, the proper URL is of course:
https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57
(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual. Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed. Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented. At first, I felt the keyboard as being more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a composition manager, everything feels snappy. The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works. Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:
cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF
But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button The power button is a little dangerous. It's quite easy to hit, as it's right next to one expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button. So obviously, don't do that. But that's not very helpful. An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:
[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff
You might have to create the directory first:
mkdir /etc/systemd/logind.conf.d/
Then restart logind:
systemctl restart systemd-logind
And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends... Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn key that I actually find quite useful.
Key  Equivalent  Effect                  Command
p    Pause       lock screen             xset s activate
b    Break       ?                       ?
k    ScrLk       switch keyboard layout  N/A
It looks like those are defined in the microcontroller so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there. Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12, which only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player". The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark. A skinny elephant would be performed with alt PrtScr (above F11) KEY, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold the fn only if "function lock" is on, but that's actually the opposite of my experience. Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all. This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022. In the meantime, the fancy functionality can supposedly be disabled with:
echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf
... and a reboot. This solution is also documented in the upstream guide. Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound. Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright. I have yet to use it in full sunlight. The dimmed light is very dim, which I like.

Screen backlight I bind brightness keys to xbacklight in i3, but out of the box I get this error:
sep 29 22:09:14 angela i3[5661]: No outputs have backlight property
It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:
Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection
This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
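For reference, the i3 side of that binding might look like this (a sketch of such a config, assuming the xbacklight package is installed and the brightness keysyms reach Xorg, see the keybindings section above):
# ~/.config/i3/config: brightness keys drive the backlight directly
bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10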

Multiple monitor support I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great. I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:
  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card
Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test). But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports. But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs. My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:
autorandr -l framework-external-dual-lg-acer
The magic:
autorandr -l horizontal
... also works well. The worst problem with those monitors right now is that they have a radically smaller resolution than the main screen on the laptop, so I need to reset the font scaling to normal every time I switch back and forth between those monitors and the laptop, which means I actually need to do this:
autorandr -l horizontal &&
echo Xft.dpi: 96 | xrdb -merge &&
systemctl restart terminal xcolortaillog background-image emacs &&
i3-msg restart
Kind of disruptive.
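This could presumably be automated with an autorandr hook; a sketch (autorandr runs an executable ~/.config/autorandr/postswitch after every profile change, with the profile name in AUTORANDR_CURRENT_PROFILE):
#!/bin/sh
# ~/.config/autorandr/postswitch: reset font scaling per profile
case "$AUTORANDR_CURRENT_PROFILE" in
    horizontal) dpi=96 ;;    # external monitors: normal DPI
    *)          dpi=144 ;;   # laptop panel: HiDPI
esac
echo "Xft.dpi: $dpi" | xrdb -merge
i3-msg restart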

Expansion ports I ordered a total of 10 expansion ports. I did manage to initialize the 1TB drive as an encrypted storage, mostly to keep photos as this is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around). The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually. There are only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this. I can't wait to see more expansion cards produced. There's an Ethernet expansion card which quickly went out of stock basically the day it was announced, but was eventually restocked. I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found a SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing. Still, I really like that idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one, which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards. Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't). The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least. (As I wrote this, I had the laptop plugged into the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.) I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!) This is awesome. And it works from any of the expansion ports, of course. There's a little LED next to the expansion ports as well, which indicates the charge status:
  • red/amber: charging
  • white: charged
  • off: unplugged
I couldn't find documentation about this, but the forum answered. This is something of a recurring theme with the Framework: while it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner manual" that shows you the different parts of the laptop and what they do. Again, something the MNT reform did well. Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed. Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources at once, but that seems like a bit much to ask.

Multimedia and other devices Those things also work:
  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below
There's also a light sensor, but it conflicts with the keyboard brightness controls (see above). There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere. It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often get lots of noise from the board. With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking. On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework. Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises. In other words, the hiss of the sound card built into the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop. See also my audio page.
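One way to reproduce this kind of noise-floor measurement from the command line instead of Audacity (a sketch, assuming the alsa-utils and sox packages; device and path names will vary):
arecord -d 5 -f cd /tmp/silence.wav    # record 5 seconds of "silence"
sox /tmp/silence.wav -n stats          # the "RMS lev dB" line is the noise floor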

Performance tests

Compiling Linux 5.19.11 On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:
5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k
10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps
This was using 16 watts of power, with full screen brightness. With all 16 cores (make -j16), it takes less than 25 minutes:
19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k
8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps
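For reference, this kind of timing can be reproduced in any kernel tree with something like the following (a sketch; not the exact Debian configuration used above):
make defconfig     # a generic configuration, smaller than Debian's
time make -j16     # or drop -j16 for the single-core test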
I had to plug in the normal power supply after a few minutes because the battery would actually run out using my desk's power grommet (34 watts). During compilation, fans were spinning really hard, quite noisy, but not painfully so. The laptop was sucking 55 watts of power, steadily:
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5 50.0 47.5 49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
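Output like the table above comes from powerstat; a typical invocation might be (a sketch, assuming the Debian powerstat package):
sudo powerstat -R 1 60    # read RAPL power domains, sample every second, 60 times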

memtest86+ I ran Memtest86+ v6.00b3. It shows something like this:
Memtest86+ v6.00b3        12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78 C   Pass  2% #
L1 Cache:   48KB    414 GB/s   Test 46% ##################
L2 Cache: 1.25MB    118 GB/s   Test #3 [Moving inversions, 1s & 0s] 
L3 Cache:   12MB     43 GB/s   Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  :  15.7GB  14.9 GB/s   Pattern: 
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR))    Time:  0:27:23  Status: Pass     \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51    Pass:  1        Errors: 0
--------------------------------------------------------------------------------
Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)
                          Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64
So about 30 minutes for a full 16GB memory test.

Software setup Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work. I don't understand why I am so naïve sometimes. It's mind boggling. Obviously, it didn't happen that way at all, and I spent the best part of the three following days tinkering with the laptop.

Secure boot and EFI First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS. At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI. So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:
Unable to find medium containing a live file system
This, it turns out, was fixed in Debian lately, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix. I did manage to boot the development version of the Debian installer which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition. At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion. I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive. Oops. It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how to recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:
/etc/ssh
/var/lib/puppet
So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install and then I had a good base setup, although not exactly as it was before. 1700 packages were installed manually on angela before the reinstall, and not in Puppet. I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.
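Concretely, that restore might look something like this (a sketch; /mnt/backup is a hypothetical mount point, my actual backups are managed differently):
# restore the two critical directories to keep the machine's identity
rsync -a /mnt/backup/etc/ssh/ /etc/ssh/
rsync -a /mnt/backup/var/lib/puppet/ /var/lib/puppet/
# then let Puppet reinstall everything it manages
puppet agent --test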

LVFS support It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:
apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:
fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf 
fwupdmgr update
Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks The Framework laptop resolution (2256px X 1504px) is big enough to give you a pretty small font size, so welcome to the marvelous world of "scaling". The Debian wiki page has a few tricks for this.

Console This will make the console and grub fonts more readable:
cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg Adding this to your .Xresources will make everything look much bigger:
! 1.5*96
Xft.dpi: 144
Apparently, some of this can also help:
! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter:  lcddefault
Xft.hintstyle:  hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb
In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me. The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness". TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive. The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update. Note that the following keybindings are active in the BIOS POST check:
Key     Meaning
F2      Enter BIOS setup menu
F12     Enter BIOS boot manager
Delete  Enter BIOS setup menu

WiFi compatibility issues I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon) so the WiFi card didn't work out of the box. But even after copying the firmware through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.) Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network. Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the commandline" than anything else. NetworkManager worked fine and got WiFi working correctly. Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.
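For the record, the by-hand WPA configuration goes something like this (a sketch; wlp170s0 is an assumed interface name, check ip link for yours):
# generate a config stanza from the network name and passphrase
wpa_passphrase "my-essid" "my-passphrase" > /etc/wpa_supplicant.conf
# associate in the background, then get a DHCP lease
wpa_supplicant -B -i wlp170s0 -c /etc/wpa_supplicant.conf
dhclient wlp170s0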

Battery life I was getting about 7 hours of battery on the Purism Librem 13v4, and that's after a year or two of use. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with. (Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.) The battery life doesn't seem up to, say, Dell XPS 13, ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box. That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of battery life estimated (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected. Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:
Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6 39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6 27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3 82.1 44.0 80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2 82.0 17.0 82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71
Expansion cards matter a lot in the battery life (see below for a thorough discussion): my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power). Interestingly, playing a video in a (720p) window takes up more power (10.5W) than in full screen (9.5W) but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the window struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me. Fooling around the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around Discourse, so I suspect the website is a pretty big battery drain, actually. I see about 10-12 W, while I was probably at half that (6-8W) just playing music with mpv in the background... In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross. (When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.
  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1
TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try:
  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage (see the example after this list)
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
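For example, on a GRUB system the kernel parameters from the list above would go in /etc/default/grub (a sketch; keep whatever options you already have there), followed by running update-grub and rebooting:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme.noacpi=1 mem_sleep_default=deep"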
Update: also try to follow the official optimization guide. It was made for Ubuntu but will probably also work for your distribution of choice with a few tweaks. They recommend using tlpui but it's not packaged in Debian. There is, however, a Flatpak release. In my case, it resulted in the following diff to tlp.conf: tlp.patch.

Background on CPU architecture There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new. The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16 but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at the cost of some performance penalty. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers, but they stopped short of talking more about the specific problems Linux is facing in Alder Lake:
Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.
All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores. And Linux has had the scheduler changes to make use of this (but maybe is still having trouble). In any case, this might not be the source of power management problems on my laptop, quite the opposite. Also note that the firmware updates for various chipsets are supposed to improve things eventually. On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat). Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there's only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6. But the next column over does show many CPUs in C10 states...) As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop. Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. When removing all expansion cards and making the laptop idle, I can spin it down to 4 watts power usage at the moment, and an amazing 2 watts when the screen is turned off.

Caveats Abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running with everything: it could very well be that some things (I'm looking at you Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W/electron, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there. Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:
Device        Minimum  Average  Max    Stdev   Note
Screen, 100%  2.4W     2.6W     2.8W   N/A
Screen, 1%    30mW     140mW    250mW  N/A
Backlight 1   290mW    ?        ?      ?       fairly small, all things considered
Backlight 2   890mW    1.2W     3W?    460mW?  geometric progression
Backlight 3   1.69W    1.5W     1.8W?  390mW?  significant power use
Radios        100mW    250mW    N/A    N/A
USB-C         N/A      N/A      N/A    N/A     negligible power drain
USB-A         10mW     10mW     ?      10mW    almost negligible
DisplayPort   300mW    390mW    600mW  N/A     not passive
HDMI          380mW    440mW    1W?    20mW    not passive
1TB SSD       1.65W    1.79W    2W     12mW    significant, probably higher when busy
MicroSD       1.6W     3W       6W     1.93W   highest power usage, possibly even higher when busy
Ethernet      1.69W    1.64W    1.76W  N/A     comparable to the SSD card
So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking out 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI do take a few hundred milliwatts. It looks like USB-A connectors have this fundamental flaw that they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:
It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.
Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.) The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning. The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed. Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W. The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:
Device            Minimum  Average  Max    Stdev  Note
Baseline          1.96W    2.01W    2.11W  30mW   1 USB-C, screen off, backlight off, no radios
2 USB-C           1.95W    2.16W    3.69W  430mW  USB-C confirmed as mostly passive...
3 USB-C           1.95W    2.16W    3.69W  430mW  ... although with extra stdev
1TB SSD           3.72W    3.85W    4.62W  200mW  unchanged from before upgrade
1 USB-A           1.97W    2.18W    4.02W  530mW  unchanged
2 USB-A           1.97W    2.00W    2.08W  30mW   unchanged
3 USB-A           1.94W    1.99W    2.03W  20mW   unchanged
MicroSD w/o card  3.54W    3.58W    3.71W  40mW   significant improvement! 2-3W power saving!
MicroSD w/ card   3.53W    3.72W    5.23W  370mW  new measurement! increased deviation
DisplayPort       2.28W    2.31W    2.37W  20mW   unchanged
1 HDMI            2.43W    2.69W    4.53W  460mW  unchanged
2 HDMI            2.53W    2.59W    2.67W  30mW   unchanged
External USB      3.85W    3.89W    3.94W  30mW   new result
Ethernet          3.60W    3.70W    4.91W  230mW  unchanged
Note that this table reads differently from the previous one: here we show absolute numbers, while the previous table made a confusing attempt at showing numbers relative to the baseline. Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats, except for the MicroSD card, which has significantly improved. The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card inserted, which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage I wrote a quick hack to evaluate how much power is used during sleep (a sketch of that kind of hook appears further below). Apparently, this is one of the areas that should have improved since the first Framework model; let's find out. My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]
... to this:
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]
That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad! In comparison, here is my really old x220, before:
sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now                     =   5070 [mWh]
... after:
sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now                     =   4980 [mWh]
... which is 90mWh in 10 minutes, or a whopping 540mW (note that the x220 reports energy in mWh, not charge in mAh), which was possibly okay when this battery was new (62000mWh, so about 115 hours, or almost 5 days), but this battery is almost dead and has only 5210mWh when full, so only about 10 hours of standby. And here is the Framework performing a similar test, before:
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full                    =   3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now                     =   2861 [mAh]
... after:
sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now                     =   2812 [mAh]
... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA: much more than the Purism, and in the same league as the x220 with its dying battery. At this rate, the battery would last only 12 hours on standby!! That is pretty bad. Note that this was done with the following expansion cards:
  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN
Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA). This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA). Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:
sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now                     =   2544 [mAh]
... after:
sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now                     =   2542 [mAh]
... which is 2mAh in 74 seconds, or 97mA, which brings us to a more reasonable 36 hours, or a day and a half. That's still above the x220's power usage, and more than an order of magnitude worse than the Purism laptop. It's also far from the 0.4% promised by upstream, which would be 14mA for the 3500mAh battery. It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see from the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life. You can confirm the current sleep mode with:
# cat /sys/power/mem_sleep
s2idle [deep]
In the above, deep is selected. You can change it on the fly with:
printf s2idle > /sys/power/mem_sleep
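As an aside, the journal lines quoted in this section can be produced by a hook in /lib/systemd/system-sleep/, which systemd-sleep runs with "pre" before suspend and "post" after resume. A minimal sketch of such a hook (the file name and exact output format here are illustrative, not necessarily what produced the logs above, and the script must be made executable):
#!/bin/sh
# /lib/systemd/system-sleep/battery-log: log battery charge around suspend;
# runs on both "pre" and "post", giving a before/after pair in the journal
BAT=/sys/class/power_supply/BAT1/charge_now
echo "$BAT = $(cat "$BAT") [mAh]" | systemd-cat -t systemd-sleep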
Here's another test:
sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now                     =   1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now                     =   1613 [mAh]
... better! 6mAh in about 6 minutes works out to 63.5mA, so more than two days of standby. A longer test:
oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now                     =   3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now                     =   3147 [mAh]
That's 180mAh in about 3.5h, or 52mA! Now at 66h, or almost 3 days. I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, the expansion card power tests show that the cards do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests: I ran them with whatever card happened to be plugged in, without paying attention, so that's likely the cause of the high power usage and fluctuations. It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post. In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:
oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now                     =   3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now                     =   3145 [mAh]
Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means about 10 days of suspend time. This is actually pretty good, and far beyond what I was expecting when starting down this journey. So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below. There is also some hope that the Chromebook edition, specifically designed with a specification of 14 days of standby time, could bring some firmware improvements back to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.
  • TODO: one final solution here is suspend-then-hibernate, which Windows uses for this (see the sketch just below)
  • TODO: consider implementing the S0ix sleep states, see also troubleshooting
  • TODO: consider https://github.com/intel/pm-graph
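On the suspend-then-hibernate point: systemd supports that mode out of the box, so a sketch of what enabling it could look like, assuming a swap device large enough to hibernate to (which I have not verified on this machine):
# /etc/systemd/sleep.conf: fall back to hibernation after 2 hours of suspend
[Sleep]
HibernateDelaySec=2h
Then suspend with systemctl suspend-then-hibernate (or set HandleLidSwitch=suspend-then-hibernate in logind.conf) instead of a plain suspend.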

Standby expansion cards test results This table is a summary of the more extensive standby-tests I have performed:
Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     sleep=deep nvme.noacpi=1
s2idle       0.29W    18.9mA    ~7    sleep=s2idle nvme.noacpi=1
normal nvme  0.31W    20mA      ~7    sleep=s2idle without nvme.noacpi=1
1 USB-C      0.23W    15mA      ~10
2 USB-C      0.23W    14.9mA          same as above
1 USB-A      0.75W    48.7mA    3     +500mW (!!) for the first USB-A card!
2 USB-A      1.11W    72mA      2     +360mW
3 USB-A      1.48W    96mA      <2    +370mW
1TB SSD      0.49W    32mA      <5    +260mW
MicroSD      0.52W    34mA      ~4    +290mW
DisplayPort  0.85W    55mA      <3    +620mW (!!)
1 HDMI       0.58W    38mA      ~4    +250mW
2 HDMI       0.65W    42mA      <4    +70mW (?)
Conclusions:
  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required
  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)
  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)
  • DisplayPort takes a surprisingly large amount of power (620mW), almost double its average runtime usage (390mW)
  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)
  • and oddly, a second HDMI card adds less extra power (70mW?!) than the first; maybe a circuit is shared by both?
A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS Framework recently (2022-11-07) announced that they would publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above results, improving both standby and runtime power usage. The update came out in December 2022 and I redid my analysis with the following results:
Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     no cards, same as before upgrade
1 USB-C      0.25W    16mA      9     same as before
2 USB-C      0.25W    16mA      9     same
1 USB-A      0.80W    62mA      3     +550mW!! worse than before
2 USB-A      1.12W    73mA      <2    +320mW, on top of the above, bad!
Ethernet     0.62W    40mA      3-4   new result, decent
1TB SSD      0.52W    34mA      4     a bit worse than before (+2mA)
MicroSD      0.51W    22mA      4     same
DisplayPort  0.52W    34mA      4+    upgrade improved by 300mW
1 HDMI       ?        38mA      ?     same
2 HDMI       ?        45mA      ?     a bit worse than before (+3mA)
Normal       1.08W    70mA      ~2    Ethernet, 2 USB-C, USB-A
Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports, which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption, which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either. Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection The BIOS has an option to limit charge to 80% to mitigate battery wear. There's also a way to control the embedded controller at runtime with fw-ectool, partly documented here. The command would be:
sudo ectool fwchargelimit 80
I looked at building this myself but failed to run it. I opened an RFP in Debian so that it can be packaged and shipped properly, and also documented my work there. Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:
root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------
+++ Battery Care
Plugin: generic
Supported features: none available
+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging
/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)
Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]
One thing that is still missing is the charge threshold data (the (not available) above). There's been some work in August to make that accessible; stay tuned. This would also make it possible to implement hysteresis support.
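In the meantime, a boot-time oneshot unit could reapply the charge limit, in case the EC forgets it after a full power cycle (an assumption I haven't verified). A sketch, assuming fw-ectool got built and installed as /usr/local/bin/ectool:
[Unit]
Description=limit battery charge to 80% to reduce wear
[Service]
Type=oneshot
ExecStart=/usr/local/bin/ectool fwchargelimit 80
[Install]
WantedBy=multi-user.target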

Ethernet expansion card The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"... The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor compared to the thin Framework laptop. I had a serious issue when first trying it: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up while the laptop is running the BIOS POST check and even when the Linux kernel boots. I first thought that the problem was likely related to the powertop service which I run at boot time to tweak some power saving settings. It seems like this:
echo 'on' > '/sys/bus/usb/devices/4-2/power/control'
... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:
echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'
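One way to persist that workaround across reboots and replugs, before the more proper tlp fix below, is a udev rule keyed on the card's USB IDs (0bda:8156, the same IDs used in the denylist below); a sketch:
# /etc/udev/rules.d/50-rtl8156-power.rules: keep the ethernet card out of autosuspend
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="8156", ATTR{power/control}="on"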
Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker, which shows how the USB_AUTOSUSPEND setting force-enables power saving even for drivers that don't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:
By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.
So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:
USB_AUTOSUSPEND=0
... but that disables auto-suspend on all USB devices, which may hurt power usage elsewhere. I have found that a combination of:
USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"
and this on the kernel commandline:
usbcore.quirks=0bda:8156:k
... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:
# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"
# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768
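As usual on Debian, snippets under /etc/default/grub.d/ only take effect after regenerating the grub configuration (and rebooting):
sudo update-grub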
Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max a gigabit fairly easily:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver
The card doesn't require any proprietary firmware blobs, which is surprising. Other than the power saving issues, it just works. In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power when idle, without a link, in the above "quirky" configuration where the card is functional but autosuspend is disabled.

Proprietary firmware blobs The Framework does need proprietary firmware to operate. Specifically:
  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree package but currently missing from it
Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in a noticeable lag between the keyboard (not the mouse!) and the display, which I estimate at 200-500ms. Symptoms also include tearing and shearing of windows; it's pretty nasty. One workaround is to delete the two affected firmware files:
cd /lib/firmware/i915 && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u
You will get the following warnings during the build, which is good, as it means the problematic firmware is disabled:
W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915
But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast. The proper fix is to use a compositing manager. I ended up using compton with the following systemd unit:
[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela
[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure
[Install]
RequiredBy=graphical-session.target
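Assuming the unit above is saved as ~/.config/systemd/user/compton.service, it gets wired into the session with:
systemctl --user daemon-reload
systemctl --user enable compton.service
... after which it starts and stops along with graphical-session.target, thanks to the RequiredBy above.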
compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra in an otherwise similar setup). I also tried compiz, but it would just crash with:
anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0
When running from the base session, I would get this instead:
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'
Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum. Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.
Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get these errors in dmesg:
[   19.534429] Intel(R) Wireless WiFi driver for Linux
[   19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[   19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[   19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[   19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[   19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[   19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[   19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[   19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[   19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[   19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[   19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm
Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what those firmware files do or don't do, as the WiFi seems to work well without them. I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them. There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
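Those knobs can be tried through a modprobe.d snippet; a sketch, untested here, using the low-power values described above:
# /etc/modprobe.d/iwlwifi-power.conf
options iwlmvm power_scheme=3
options iwlwifi power_save=1
# reload the modules (or reboot) to apply:
# modprobe -r iwlmvm iwlwifi && modprobe iwlwifi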

Graphics acceleration After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears. Running it in a window gives me odd results, as the gears basically grind to a halt:
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds =  5.022 FPS
Ouch. 5FPS! But interestingly, once the window is in full screen, it does hit the monitor refresh rate:
300 frames in 5.0 seconds = 60.000 FPS
I'm not really a gamer and I don't normally use any of that fancy graphics acceleration stuff (except maybe my browser does?). I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when running those simulations. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package). Following this post, I also made sure to have those settings in my about:config in Firefox, or, in user.js:
user_pref("media.ffmpeg.vaapi.enabled", true);
Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird. The guide also mentions installing the intel-media-driver package, but I could not find that in Debian. The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.
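Before tweaking Firefox at all, it's worth confirming that VA-API actually works; the vainfo tool mentioned above should report a driver and a list of supported profiles, e.g.:
vainfo | grep -i driver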

Chromium / Signal desktop bugs It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:
export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"
It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!). Signal Desktop has a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead, I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this:
exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"
This was mostly done in this Puppet commit. I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to Wayland.

Graphics card hangs I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming. The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29). It could also be that bug because the error message I get is actually:
Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915]) 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled
It's a solid 30-second graphical hang; maybe the keyboard and everything else keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There are all sorts of other workarounds in there, for example this:
echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0"   sudo tee /etc/modprobe.d/i915.conf
from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches. The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

USB docks are hell Note that the monitors are hooked up to angela through a USB-C / Thunderbolt dock from Cable Matters, with the lovely name of 201053-SIL. It has issues, see this blog post for an in-depth discussion.

Shipping details I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late. People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand. As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied by an email notification:
  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)
This comes from the crowdsourced spreadsheet, which should be updated when the status changes here. I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my doorstep on September 27th, about 33 days after ordering. It seems current orders are not processed in "batches" but in real time; see this blog post for details on shipping.

Shipping trivia I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took: I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of Fedex shipping hubs. There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to enter on an endless, two week long road trip across the continental US.


8 March 2023

Joey Hess: the slink and a half boxed set

Today I stumbled upon this youtube video which takes a retrocomputing look at a product I was involved in creating in 1999. It was fascinating looking back at it, and I realized I've never written down how this boxed set of Debian "slink and a half", an unofficial Debian release, came to be. As best I can remember, the CD in that box was Debian 2.1 ("slink") with the linux kernel updated from 2.0 to 2.2. Specifically, it used VA Linux Systems's patched version of the kernel, which supported their hardware better, but also 2.2 generally supported a lot of hardware much better than 2.0. There were some other small modifications that got rolled back into Debian 2.2. I mostly remember updating the installer to support that kernel, and building CD images. Probably over the course of a few weeks. This was the first time I worked on the (old) Debian installer, and the first time I built a Debian CD. I also edited the O'Reilly book that was included in the boxed set. It was wild when pallet loads of these boxed sets showed up. I think they sold for $19.95 at Fry's, although VA Linux Systems also gave lots of them away at conferences.
Watching the video of the installation, I was struck again and again by pain points, which the video does a good job of highlighting. It was a guided tour of everything about Debian that I wanted to fix in 1999. At each pain point I remembered how we fixed it, often years later, after considerable effort. I remembered how the old installer (the boot-floppies) was mostly moribund with only a couple people able and willing to work on it at all. (The video is right to compare its partitioning with old Linux installers from the early 90's because it was a relic from that era!) I remembered designing a new Debian installer that was more modular so more people could get invested in maintaining smaller pieces of it. It was, yes, a second system, and developed too slowly, but it was intended to withstand the test of time. It mostly has, since it's used to this day. I remembered how partitioning got automated in the new Debian installer, by a new "partman" program being contributed by someone I'd never heard of before, obsoleting some previous attempts we'd made (yay modularity). I remembered how I started the os-prober project, which lets the Debian installer add other OS's that are co-installed on the machine to the boot menu. And how that got picked up even outside of Debian, by eg Red Hat. I remembered working on tasksel soon after that project was started, and all the difficult decisions about what tasks to offer and what software it should install. I remembered how horrible the stream of questions from package after package was to deal with, and how I implemented debconf, which tidied that up, integrated it into the installer's UI, made it automatable, and let novices avoid seeing configuration that was intended for experts. And I remembered writing dpkg-reconfigure, so that those configuration choices could be revisited later. It's quite possible I would not have done most of that if VA Linux Systems had not tasked me with making this CD. The thing about releasing something imperfect into the world is you start to feel a responsibility to improve it...
The main critique in the video specific to this boxed set, and not to any other Debian release of this era, is that this was a single CD, while 2 CDs were needed for all of Debian at the time. And many people had only dialup internet, so they would be stuck very slowly downloading any other software they needed. And likewise those free forever upgrades the box promised. Oh the irony: After starting many of those projects, I left VA Linux Systems and the lands of fast internet, and spent 4 years on dialup. Most of that stuff was developed on dialup, though I did have about a year with better internet at the end to put the finishing touches in the new installer that shipped in Debian 3.1. Yes, the dialup apt-gets were excruciatingly slow. But the upgrades were, in fact, free forever.
PS: The video's description includes "it would take many years of effort (primarily from Ubuntu) that would help smooth out many of the rough end of this product". All these years later, I do continue to enjoy people involved in Ubuntu downplaying the extent to which it was a reskin of my Debian installer, shipped on a CD a few months before Debian could get around to shipping it. Like they say, history doesn't repeat, but it does rhyme. PPS: While researching this blog post, I found that an even more obscure, and broken, Debian CD was produced by VA Linux in November 1999. Distributed for free at Comdex by the thousands, this CD lacked the Packages file that is necessary for apt-get to use it. I don't know if any copies of that CD still exist. If you find one, email me and I'll send some instructions I wrote up in 1999 to work around the problem.

7 March 2023

Valhalla's Things: Forgotten Yeast Bread

Posted on March 7, 2023
Yesterday around 13:00 I started my usual "I'm being lazy" bread recipe:
  • 400 g flour
  • 250 g water
  • 6 g salt
worked for 8 minutes (by machine), left to rise until about 18:00. For the record, it was a strong flour (310 W), type 1, so white, but somewhat coarsely ground. And then, when it was time to cook bread for dinner, I realized that something was missing. Something critical. See if you can spot it in the list above. The yeast. Some bread was taken out of the freezer and defrosted in the oven, but I didn't want to throw away the flour, so I mixed 2-3 g dried yeast, 10 g flour and 10 g water, and left it to rise until after dinner. Then I added it to the dough, added some more water (I know, I should have measured it; I didn't expect to have to repeat the thing. It was probably about 20 g), mixed for 5 minutes, covered it to rise. This afternoon, around 15:30, I took the dough, folded it 5-6 times, formed a round loaf on the lined baking tray and left it in the cold oven until 17:45. Then I removed it from the oven, turned it on at 240°C, scored the top of the loaf and sprinkled it with water. When the oven was hot I baked the loaf for 10 minutes at 240°C, then turned it down to 160°C for 20 additional minutes. And then I realized I'll need to repeat this. No, there are no pictures (there is some left, but it's too dark to take pictures).

6 March 2023

Valhalla's Things: Bookbinding: photo album

Posted on March 6, 2023
[Image: an open book with a watercolour of a costume pasted from two corners on one page; near the spine there is a sliver of paper as a spacer.] When I paint postcards I tend to start with a draft (usually on lightweight (250 g/m²) watercolour paper), then trace[1] the drawing on blank postcards and paint it again. I keep the drafts for a number of reasons; for the views / architectural ones I'm using a landscape photo album that I bought many years ago, but lately I've also sent a few cards with my historical outfits to people who like to be kept updated on that, and I wanted a different book for those, both for better organization and to be able to keep them in the portrait direction. If you know me, you can easily guess that buying one wasn't considered an option. [Image: a closed hardcover book in uniform dark grey.] Since I'm not going to be writing on the pages, I decided to use a relatively cheap 200 g/m² linoprint paper with a nice feel, and I've settled on a B6 size (before trimming) to hold A6 postcard drafts. For the binding I've decided to use a technique I learned from a craft book ages ago that doesn't use tapes, and added a full hard cover in dark grey linen-feel[2] paper. For the end-papers I've used some random sheets of light blue paper (probably around 100-something g/m²), and that's the thing where I could have done better, but they work. Up to now there isn't anything I hadn't done before; what was new was the fact that this book was meant to hold things between the pages, and I needed to provide space for them. [Image: a book seen from the top: near the spine all signatures are made of 4 sheets, but two of them for each signature are just stubs, and leave open spaces between the pages.] After looking on the internet for solutions, I settled on adding spacers by making a signature composed of paper - spacer - paper - spacer, with the spacers being 2 cm wide, folded in half. And then, between finishing binding the book and making the cover, I utterly forgot to add the head bands. Argh. It's not the first time I make this error. [Image: the same book, open on an empty page.] I'm happy enough with the result. There are things that are easy to improve on in the next iteration (endpapers and head bands), and something in me is not 100% happy with the fact that the spacers aren't placed between every sheet, so there are places with no spacer and places with two of them; but I can't think of (and couldn't find) a way to make them otherwise with a sewn book, unless I sew each individual sheet, which sounds way too bulky (the album I'm using for the landscapes was glued, but I didn't really want to go that way). The size is smaller than the other one I was using and doesn't leave a lot of room around the paintings, but that isn't necessarily a bad thing, because it also means less wasted space. I believe that one of my next projects will be another similar book in a landscape format, for those postcard drafts that aren't landscapes nor clothing related. And then maybe another? or two? or
Traceback (most recent call last):
TooManyProjectsError: project queue is full

  1. yes, trace. I can't draw. I have too many hobbies to spend the required amount of time every day to practice it. I'm going to fake it. 85% of the time I'm tracing from a photo I took myself, so I'm not even going to consider it cheating.
  2. the description of which, on the online shop, made it look like fabric, even if the price was suspiciously low, so I bought a sheet to see what it was. It wasn't fabric. It feels and looks nice, but I'm not sure how sturdy it's going to be.
